00:00:00.000 Started by upstream project "autotest-per-patch" build number 132539 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.024 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.028 The recommended git tool is: git 00:00:00.028 using credential 00000000-0000-0000-0000-000000000002 00:00:00.030 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.043 Fetching changes from the remote Git repository 00:00:00.046 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.057 Using shallow fetch with depth 1 00:00:00.057 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.057 > git --version # timeout=10 00:00:00.072 > git --version # 'git version 2.39.2' 00:00:00.072 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.092 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.092 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.175 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.186 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.196 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.196 > git config core.sparsecheckout # timeout=10 00:00:02.205 > git read-tree -mu HEAD # timeout=10 00:00:02.219 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.239 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.239 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.349 [Pipeline] Start of Pipeline 00:00:02.362 [Pipeline] library 00:00:02.364 Loading library shm_lib@master 00:00:02.364 Library shm_lib@master is cached. Copying from home. 00:00:02.385 [Pipeline] node 00:00:02.398 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:02.399 [Pipeline] { 00:00:02.411 [Pipeline] catchError 00:00:02.413 [Pipeline] { 00:00:02.429 [Pipeline] wrap 00:00:02.437 [Pipeline] { 00:00:02.445 [Pipeline] stage 00:00:02.449 [Pipeline] { (Prologue) 00:00:02.657 [Pipeline] sh 00:00:02.941 + logger -p user.info -t JENKINS-CI 00:00:02.955 [Pipeline] echo 00:00:02.957 Node: WFP20 00:00:02.962 [Pipeline] sh 00:00:03.258 [Pipeline] setCustomBuildProperty 00:00:03.272 [Pipeline] echo 00:00:03.274 Cleanup processes 00:00:03.280 [Pipeline] sh 00:00:03.564 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:03.564 991760 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:03.577 [Pipeline] sh 00:00:03.862 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:03.863 ++ grep -v 'sudo pgrep' 00:00:03.863 ++ awk '{print $1}' 00:00:03.863 + sudo kill -9 00:00:03.863 + true 00:00:03.878 [Pipeline] cleanWs 00:00:03.888 [WS-CLEANUP] Deleting project workspace... 00:00:03.888 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.895 [WS-CLEANUP] done 00:00:03.899 [Pipeline] setCustomBuildProperty 00:00:03.914 [Pipeline] sh 00:00:04.196 + sudo git config --global --replace-all safe.directory '*' 00:00:04.272 [Pipeline] httpRequest 00:00:04.632 [Pipeline] echo 00:00:04.633 Sorcerer 10.211.164.101 is alive 00:00:04.642 [Pipeline] retry 00:00:04.645 [Pipeline] { 00:00:04.661 [Pipeline] httpRequest 00:00:04.665 HttpMethod: GET 00:00:04.666 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.666 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.667 Response Code: HTTP/1.1 200 OK 00:00:04.667 Success: Status code 200 is in the accepted range: 200,404 00:00:04.668 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.956 [Pipeline] } 00:00:04.970 [Pipeline] // retry 00:00:04.977 [Pipeline] sh 00:00:05.258 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.273 [Pipeline] httpRequest 00:00:08.296 [Pipeline] echo 00:00:08.299 Sorcerer 10.211.164.101 is dead 00:00:08.310 [Pipeline] httpRequest 00:00:11.329 [Pipeline] echo 00:00:11.331 Sorcerer 10.211.164.101 is dead 00:00:11.343 [Pipeline] httpRequest 00:00:13.318 [Pipeline] echo 00:00:13.320 Sorcerer 10.211.164.101 is alive 00:00:13.329 [Pipeline] retry 00:00:13.331 [Pipeline] { 00:00:13.344 [Pipeline] httpRequest 00:00:13.348 HttpMethod: GET 00:00:13.349 URL: http://10.211.164.101/packages/spdk_f5304d6615c96a2153ff252f43bf19fa1f4ba13c.tar.gz 00:00:13.349 Sending request to url: http://10.211.164.101/packages/spdk_f5304d6615c96a2153ff252f43bf19fa1f4ba13c.tar.gz 00:00:13.358 Response Code: HTTP/1.1 200 OK 00:00:13.359 Success: Status code 200 is in the accepted range: 200,404 00:00:13.359 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_f5304d6615c96a2153ff252f43bf19fa1f4ba13c.tar.gz 00:00:26.248 [Pipeline] } 00:00:26.266 [Pipeline] // retry 00:00:26.274 [Pipeline] sh 00:00:26.555 + tar --no-same-owner -xf spdk_f5304d6615c96a2153ff252f43bf19fa1f4ba13c.tar.gz 00:00:29.097 [Pipeline] sh 00:00:29.376 + git -C spdk log --oneline -n5 00:00:29.376 f5304d661 bdev/malloc: Fix unexpected DIF verification error for initial read 00:00:29.376 baa2dd0a5 dif: Set DIF field to 0 explicitly if its check is disabled 00:00:29.376 a91d250fa bdev: Insert metadata using bounce/accel buffer if I/O is not aware of metadata 00:00:29.376 ff173863b ut/bdev: Remove duplication with many stups among unit test files 00:00:29.376 658cb4c04 accel: Fix a bug that append_dif_generate_copy() did not set dif_ctx 00:00:29.387 [Pipeline] } 00:00:29.403 [Pipeline] // stage 00:00:29.412 [Pipeline] stage 00:00:29.414 [Pipeline] { (Prepare) 00:00:29.431 [Pipeline] writeFile 00:00:29.459 [Pipeline] sh 00:00:29.739 + logger -p user.info -t JENKINS-CI 00:00:29.750 [Pipeline] sh 00:00:30.029 + logger -p user.info -t JENKINS-CI 00:00:30.041 [Pipeline] sh 00:00:30.321 + cat autorun-spdk.conf 00:00:30.321 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.321 SPDK_TEST_FUZZER_SHORT=1 00:00:30.321 SPDK_TEST_FUZZER=1 00:00:30.321 SPDK_TEST_SETUP=1 00:00:30.321 SPDK_RUN_UBSAN=1 00:00:30.327 RUN_NIGHTLY=0 00:00:30.331 [Pipeline] readFile 00:00:30.355 [Pipeline] withEnv 00:00:30.357 [Pipeline] { 00:00:30.369 [Pipeline] sh 00:00:30.648 + set -ex 00:00:30.648 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:00:30.648 + source 
/var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:30.648 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.648 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:30.648 ++ SPDK_TEST_FUZZER=1 00:00:30.648 ++ SPDK_TEST_SETUP=1 00:00:30.648 ++ SPDK_RUN_UBSAN=1 00:00:30.648 ++ RUN_NIGHTLY=0 00:00:30.648 + case $SPDK_TEST_NVMF_NICS in 00:00:30.648 + DRIVERS= 00:00:30.648 + [[ -n '' ]] 00:00:30.648 + exit 0 00:00:30.657 [Pipeline] } 00:00:30.676 [Pipeline] // withEnv 00:00:30.681 [Pipeline] } 00:00:30.695 [Pipeline] // stage 00:00:30.706 [Pipeline] catchError 00:00:30.708 [Pipeline] { 00:00:30.719 [Pipeline] timeout 00:00:30.719 Timeout set to expire in 30 min 00:00:30.721 [Pipeline] { 00:00:30.735 [Pipeline] stage 00:00:30.737 [Pipeline] { (Tests) 00:00:30.753 [Pipeline] sh 00:00:31.032 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:31.032 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:31.032 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:00:31.032 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:00:31.032 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:31.032 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:31.032 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:00:31.032 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:31.032 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:31.032 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:31.032 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:00:31.032 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:31.032 + source /etc/os-release 00:00:31.032 ++ NAME='Fedora Linux' 00:00:31.032 ++ VERSION='39 (Cloud Edition)' 00:00:31.032 ++ ID=fedora 00:00:31.032 ++ VERSION_ID=39 00:00:31.032 ++ VERSION_CODENAME= 00:00:31.032 ++ PLATFORM_ID=platform:f39 00:00:31.032 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:31.032 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:31.032 ++ LOGO=fedora-logo-icon 00:00:31.032 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:31.032 ++ HOME_URL=https://fedoraproject.org/ 00:00:31.032 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:31.032 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:31.032 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:31.032 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:31.032 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:31.032 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:31.032 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:31.032 ++ SUPPORT_END=2024-11-12 00:00:31.032 ++ VARIANT='Cloud Edition' 00:00:31.032 ++ VARIANT_ID=cloud 00:00:31.032 + uname -a 00:00:31.032 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:31.032 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:00:34.317 Hugepages 00:00:34.317 node hugesize free / total 00:00:34.318 node0 1048576kB 0 / 0 00:00:34.318 node0 2048kB 0 / 0 00:00:34.318 node1 1048576kB 0 / 0 00:00:34.318 node1 2048kB 0 / 0 00:00:34.318 00:00:34.318 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:34.318 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:34.318 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:34.318 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:34.318 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:34.318 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:34.318 I/OAT 
0000:00:04.5 8086 2021 0 ioatdma - - 00:00:34.318 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:34.318 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:34.318 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:34.318 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:34.318 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:34.318 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:34.318 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:34.318 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:34.318 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:34.318 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:34.318 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:34.318 + rm -f /tmp/spdk-ld-path 00:00:34.318 + source autorun-spdk.conf 00:00:34.318 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.318 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:34.318 ++ SPDK_TEST_FUZZER=1 00:00:34.318 ++ SPDK_TEST_SETUP=1 00:00:34.318 ++ SPDK_RUN_UBSAN=1 00:00:34.318 ++ RUN_NIGHTLY=0 00:00:34.318 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:34.318 + [[ -n '' ]] 00:00:34.318 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:34.318 + for M in /var/spdk/build-*-manifest.txt 00:00:34.318 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:34.318 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:34.318 + for M in /var/spdk/build-*-manifest.txt 00:00:34.318 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:34.318 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:34.318 + for M in /var/spdk/build-*-manifest.txt 00:00:34.318 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:34.318 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:34.318 ++ uname 00:00:34.318 + [[ Linux == \L\i\n\u\x ]] 00:00:34.318 + sudo dmesg -T 00:00:34.318 + sudo dmesg --clear 00:00:34.318 + dmesg_pid=993217 00:00:34.318 + [[ Fedora Linux == FreeBSD ]] 00:00:34.318 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:34.318 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:34.318 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:34.318 + [[ -x /usr/src/fio-static/fio ]] 00:00:34.318 + export FIO_BIN=/usr/src/fio-static/fio 00:00:34.318 + FIO_BIN=/usr/src/fio-static/fio 00:00:34.318 + sudo dmesg -Tw 00:00:34.318 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:34.318 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:34.318 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:34.318 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:34.318 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:34.318 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:34.318 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:34.318 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:34.318 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:34.318 19:27:09 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:34.318 19:27:09 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:34.318 19:27:09 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.318 19:27:09 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1 00:00:34.318 19:27:09 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1 00:00:34.318 19:27:09 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1 00:00:34.318 19:27:09 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:00:34.318 19:27:09 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:00:34.318 19:27:09 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:00:34.318 19:27:09 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:34.318 19:27:09 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:34.318 19:27:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:34.318 19:27:09 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:34.318 19:27:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:34.318 19:27:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:34.318 19:27:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:34.318 19:27:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:34.318 19:27:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:34.318 19:27:09 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:34.318 19:27:09 -- paths/export.sh@5 -- $ export PATH 00:00:34.318 19:27:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:34.318 19:27:09 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:34.318 19:27:09 -- common/autobuild_common.sh@493 -- $ date +%s 00:00:34.318 19:27:09 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732645629.XXXXXX 00:00:34.577 19:27:09 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732645629.4qRdvI 00:00:34.577 19:27:09 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:00:34.577 19:27:09 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:00:34.577 19:27:09 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:34.577 19:27:09 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:34.577 19:27:09 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:34.577 19:27:09 -- common/autobuild_common.sh@509 -- $ get_config_params 00:00:34.577 19:27:09 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:00:34.577 19:27:09 -- common/autotest_common.sh@10 -- $ set +x 00:00:34.577 19:27:09 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:34.577 19:27:09 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:00:34.577 19:27:09 -- pm/common@17 -- $ local monitor 00:00:34.577 19:27:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:34.577 19:27:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:34.577 19:27:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:34.577 19:27:09 -- pm/common@21 -- $ date +%s 00:00:34.577 19:27:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:34.577 19:27:09 -- pm/common@21 -- $ date +%s 00:00:34.577 19:27:09 -- pm/common@25 -- $ sleep 1 00:00:34.577 19:27:09 -- pm/common@21 -- $ date +%s 00:00:34.577 19:27:09 -- pm/common@21 -- $ date +%s 00:00:34.577 19:27:09 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732645629 00:00:34.577 19:27:09 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732645629 00:00:34.577 19:27:09 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732645629 00:00:34.577 19:27:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732645629 00:00:34.577 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732645629_collect-cpu-load.pm.log 00:00:34.577 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732645629_collect-vmstat.pm.log 00:00:34.577 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732645629_collect-cpu-temp.pm.log 00:00:34.577 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732645629_collect-bmc-pm.bmc.pm.log 00:00:35.514 19:27:10 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:00:35.514 19:27:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:35.514 19:27:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:35.514 19:27:10 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:35.514 19:27:10 -- spdk/autobuild.sh@16 -- $ date -u 00:00:35.514 Tue Nov 26 06:27:10 PM UTC 2024 00:00:35.514 19:27:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:35.514 v25.01-pre-255-gf5304d661 00:00:35.514 19:27:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:35.514 19:27:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:35.514 19:27:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:35.514 19:27:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:00:35.514 19:27:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:00:35.514 19:27:10 -- common/autotest_common.sh@10 -- $ set +x 00:00:35.514 ************************************ 00:00:35.514 START TEST ubsan 00:00:35.514 ************************************ 00:00:35.514 19:27:10 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:00:35.514 using ubsan 00:00:35.514 00:00:35.514 real 0m0.001s 00:00:35.514 user 0m0.001s 00:00:35.514 sys 0m0.000s 00:00:35.514 19:27:10 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:00:35.514 19:27:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:35.514 ************************************ 00:00:35.514 END TEST ubsan 00:00:35.514 ************************************ 00:00:35.514 19:27:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:35.514 19:27:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:35.514 19:27:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:35.514 19:27:10 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:35.514 19:27:10 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:35.514 19:27:10 -- common/autobuild_common.sh@445 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:35.514 19:27:10 -- common/autotest_common.sh@1105 -- $ '[' 2 
-le 1 ']' 00:00:35.514 19:27:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:00:35.514 19:27:10 -- common/autotest_common.sh@10 -- $ set +x 00:00:35.514 ************************************ 00:00:35.514 START TEST autobuild_llvm_precompile 00:00:35.514 ************************************ 00:00:35.514 19:27:10 autobuild_llvm_precompile -- common/autotest_common.sh@1129 -- $ _llvm_precompile 00:00:35.514 19:27:10 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:00:35.773 19:27:10 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:00:35.773 Target: x86_64-redhat-linux-gnu 00:00:35.773 Thread model: posix 00:00:35.773 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:35.773 19:27:10 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:00:35.773 19:27:10 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:00:35.773 19:27:10 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:00:35.773 19:27:10 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:00:35.773 19:27:10 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:00:35.773 19:27:10 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:35.773 19:27:10 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:35.773 19:27:10 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:00:35.773 19:27:10 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:00:35.773 19:27:10 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:36.031 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:36.031 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:36.290 Using 'verbs' RDMA provider 00:00:52.134 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:04.346 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:04.916 Creating mk/config.mk...done. 00:01:04.916 Creating mk/cc.flags.mk...done. 00:01:04.916 Type 'make' to build. 
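[Editor's note: for readers following the trace, the llvm precompile stage above reduces to exporting the clang 17 toolchain and running SPDK's configure with the fuzzer runtime library. A minimal sketch of that invocation, using only the paths and flags reported in the log above (clang 17 on Fedora 39, libclang_rt.fuzzer_no_main.a at the location autobuild detected), not an authoritative recipe:

    # Sketch of the configure step traced above; paths assume the same
    # clang 17 install reported by the log.
    export CC=clang-17
    export CXX=clang++-17
    fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a

    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user \
        --with-fuzzer="$fuzzer_lib"
]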
00:01:04.916 00:01:04.916 real 0m29.312s 00:01:04.916 user 0m12.904s 00:01:04.916 sys 0m15.811s 00:01:04.916 19:27:40 autobuild_llvm_precompile -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:04.916 19:27:40 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:04.916 ************************************ 00:01:04.916 END TEST autobuild_llvm_precompile 00:01:04.916 ************************************ 00:01:04.916 19:27:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:04.916 19:27:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:04.916 19:27:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:04.916 19:27:40 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:04.916 19:27:40 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:05.176 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:05.176 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:05.744 Using 'verbs' RDMA provider 00:01:18.895 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:31.103 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:31.103 Creating mk/config.mk...done. 00:01:31.103 Creating mk/cc.flags.mk...done. 00:01:31.103 Type 'make' to build. 00:01:31.103 19:28:05 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:01:31.103 19:28:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:31.103 19:28:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:31.103 19:28:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.103 ************************************ 00:01:31.103 START TEST make 00:01:31.103 ************************************ 00:01:31.103 19:28:05 make -- common/autotest_common.sh@1129 -- $ make -j112 00:01:31.103 make[1]: Nothing to be done for 'all'. 
00:01:32.041 The Meson build system 00:01:32.041 Version: 1.5.0 00:01:32.041 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:01:32.041 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:32.041 Build type: native build 00:01:32.041 Project name: libvfio-user 00:01:32.041 Project version: 0.0.1 00:01:32.041 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:01:32.041 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:01:32.041 Host machine cpu family: x86_64 00:01:32.041 Host machine cpu: x86_64 00:01:32.041 Run-time dependency threads found: YES 00:01:32.041 Library dl found: YES 00:01:32.041 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:32.041 Run-time dependency json-c found: YES 0.17 00:01:32.041 Run-time dependency cmocka found: YES 1.1.7 00:01:32.041 Program pytest-3 found: NO 00:01:32.041 Program flake8 found: NO 00:01:32.041 Program misspell-fixer found: NO 00:01:32.041 Program restructuredtext-lint found: NO 00:01:32.041 Program valgrind found: YES (/usr/bin/valgrind) 00:01:32.041 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:32.041 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:32.041 Compiler for C supports arguments -Wwrite-strings: YES 00:01:32.041 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:32.041 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:32.041 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:32.041 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:32.041 Build targets in project: 8 00:01:32.041 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:32.041 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:32.041 00:01:32.041 libvfio-user 0.0.1 00:01:32.041 00:01:32.041 User defined options 00:01:32.041 buildtype : debug 00:01:32.041 default_library: static 00:01:32.041 libdir : /usr/local/lib 00:01:32.041 00:01:32.041 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:32.301 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:32.301 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:32.301 [2/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:32.301 [3/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:32.301 [4/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:32.301 [5/36] Compiling C object samples/null.p/null.c.o 00:01:32.301 [6/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:32.301 [7/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:32.301 [8/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:32.301 [9/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:32.301 [10/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:32.301 [11/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:32.301 [12/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:32.301 [13/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:32.301 [14/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:32.301 [15/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:32.301 [16/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:32.301 [17/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:32.301 [18/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:32.301 [19/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:32.301 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:32.301 [21/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:32.301 [22/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:32.301 [23/36] Compiling C object samples/server.p/server.c.o 00:01:32.301 [24/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:32.301 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:32.301 [26/36] Compiling C object samples/client.p/client.c.o 00:01:32.301 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:32.301 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:32.301 [29/36] Linking static target lib/libvfio-user.a 00:01:32.301 [30/36] Linking target samples/client 00:01:32.301 [31/36] Linking target test/unit_tests 00:01:32.301 [32/36] Linking target samples/gpio-pci-idio-16 00:01:32.560 [33/36] Linking target samples/null 00:01:32.560 [34/36] Linking target samples/lspci 00:01:32.560 [35/36] Linking target samples/shadow_ioeventfd_server 00:01:32.560 [36/36] Linking target samples/server 00:01:32.560 INFO: autodetecting backend as ninja 00:01:32.560 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:32.560 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:32.819 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:32.820 ninja: no work to do. 00:01:38.200 The Meson build system 00:01:38.200 Version: 1.5.0 00:01:38.200 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:38.200 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:38.200 Build type: native build 00:01:38.200 Program cat found: YES (/usr/bin/cat) 00:01:38.200 Project name: DPDK 00:01:38.200 Project version: 24.03.0 00:01:38.200 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:01:38.200 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:01:38.200 Host machine cpu family: x86_64 00:01:38.200 Host machine cpu: x86_64 00:01:38.200 Message: ## Building in Developer Mode ## 00:01:38.200 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:38.200 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:38.200 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:38.200 Program python3 found: YES (/usr/bin/python3) 00:01:38.200 Program cat found: YES (/usr/bin/cat) 00:01:38.200 Compiler for C supports arguments -march=native: YES 00:01:38.200 Checking for size of "void *" : 8 00:01:38.200 Checking for size of "void *" : 8 (cached) 00:01:38.200 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:38.200 Library m found: YES 00:01:38.200 Library numa found: YES 00:01:38.200 Has header "numaif.h" : YES 00:01:38.200 Library fdt found: NO 00:01:38.200 Library execinfo found: NO 00:01:38.200 Has header "execinfo.h" : YES 00:01:38.200 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:38.200 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:38.200 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:38.200 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:38.200 Run-time dependency openssl found: YES 3.1.1 00:01:38.200 Run-time dependency libpcap found: YES 1.10.4 00:01:38.200 Has header "pcap.h" with dependency libpcap: YES 00:01:38.200 Compiler for C supports arguments -Wcast-qual: YES 00:01:38.200 Compiler for C supports arguments -Wdeprecated: YES 00:01:38.200 Compiler for C supports arguments -Wformat: YES 00:01:38.200 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:38.200 Compiler for C supports arguments -Wformat-security: YES 00:01:38.200 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:38.200 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:38.200 Compiler for C supports arguments -Wnested-externs: YES 00:01:38.200 Compiler for C supports arguments -Wold-style-definition: YES 00:01:38.200 Compiler for C supports arguments -Wpointer-arith: YES 00:01:38.200 Compiler for C supports arguments -Wsign-compare: YES 00:01:38.200 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:38.200 Compiler for C supports arguments -Wundef: YES 00:01:38.200 Compiler for C supports arguments -Wwrite-strings: YES 00:01:38.200 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:38.200 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:38.200 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:01:38.200 Program objdump found: YES (/usr/bin/objdump) 00:01:38.200 Compiler for C supports arguments -mavx512f: YES 00:01:38.200 Checking if "AVX512 checking" compiles: YES 00:01:38.200 Fetching value of define "__SSE4_2__" : 1 00:01:38.200 Fetching value of define "__AES__" : 1 00:01:38.200 Fetching value of define "__AVX__" : 1 00:01:38.200 Fetching value of define "__AVX2__" : 1 00:01:38.200 Fetching value of define "__AVX512BW__" : 1 00:01:38.200 Fetching value of define "__AVX512CD__" : 1 00:01:38.200 Fetching value of define "__AVX512DQ__" : 1 00:01:38.200 Fetching value of define "__AVX512F__" : 1 00:01:38.200 Fetching value of define "__AVX512VL__" : 1 00:01:38.200 Fetching value of define "__PCLMUL__" : 1 00:01:38.200 Fetching value of define "__RDRND__" : 1 00:01:38.200 Fetching value of define "__RDSEED__" : 1 00:01:38.200 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:38.200 Fetching value of define "__znver1__" : (undefined) 00:01:38.200 Fetching value of define "__znver2__" : (undefined) 00:01:38.200 Fetching value of define "__znver3__" : (undefined) 00:01:38.200 Fetching value of define "__znver4__" : (undefined) 00:01:38.200 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:38.200 Message: lib/log: Defining dependency "log" 00:01:38.200 Message: lib/kvargs: Defining dependency "kvargs" 00:01:38.200 Message: lib/telemetry: Defining dependency "telemetry" 00:01:38.200 Checking for function "getentropy" : NO 00:01:38.200 Message: lib/eal: Defining dependency "eal" 00:01:38.200 Message: lib/ring: Defining dependency "ring" 00:01:38.200 Message: lib/rcu: Defining dependency "rcu" 00:01:38.200 Message: lib/mempool: Defining dependency "mempool" 00:01:38.200 Message: lib/mbuf: Defining dependency "mbuf" 00:01:38.200 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:38.200 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:38.200 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:38.200 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:38.200 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:38.200 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:38.200 Compiler for C supports arguments -mpclmul: YES 00:01:38.200 Compiler for C supports arguments -maes: YES 00:01:38.200 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:38.200 Compiler for C supports arguments -mavx512bw: YES 00:01:38.200 Compiler for C supports arguments -mavx512dq: YES 00:01:38.200 Compiler for C supports arguments -mavx512vl: YES 00:01:38.200 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:38.200 Compiler for C supports arguments -mavx2: YES 00:01:38.200 Compiler for C supports arguments -mavx: YES 00:01:38.200 Message: lib/net: Defining dependency "net" 00:01:38.200 Message: lib/meter: Defining dependency "meter" 00:01:38.200 Message: lib/ethdev: Defining dependency "ethdev" 00:01:38.200 Message: lib/pci: Defining dependency "pci" 00:01:38.200 Message: lib/cmdline: Defining dependency "cmdline" 00:01:38.200 Message: lib/hash: Defining dependency "hash" 00:01:38.200 Message: lib/timer: Defining dependency "timer" 00:01:38.200 Message: lib/compressdev: Defining dependency "compressdev" 00:01:38.200 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:38.200 Message: lib/dmadev: Defining dependency "dmadev" 00:01:38.200 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:38.200 Message: lib/power: Defining dependency "power" 00:01:38.200 Message: lib/reorder: Defining 
dependency "reorder" 00:01:38.200 Message: lib/security: Defining dependency "security" 00:01:38.200 Has header "linux/userfaultfd.h" : YES 00:01:38.200 Has header "linux/vduse.h" : YES 00:01:38.200 Message: lib/vhost: Defining dependency "vhost" 00:01:38.200 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:38.200 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:38.200 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:38.200 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:38.200 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:38.200 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:38.200 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:38.200 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:38.200 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:38.200 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:38.200 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:38.200 Configuring doxy-api-html.conf using configuration 00:01:38.200 Configuring doxy-api-man.conf using configuration 00:01:38.200 Program mandb found: YES (/usr/bin/mandb) 00:01:38.200 Program sphinx-build found: NO 00:01:38.200 Configuring rte_build_config.h using configuration 00:01:38.200 Message: 00:01:38.200 ================= 00:01:38.200 Applications Enabled 00:01:38.200 ================= 00:01:38.200 00:01:38.200 apps: 00:01:38.200 00:01:38.200 00:01:38.200 Message: 00:01:38.200 ================= 00:01:38.200 Libraries Enabled 00:01:38.200 ================= 00:01:38.200 00:01:38.200 libs: 00:01:38.200 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:38.200 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:38.200 cryptodev, dmadev, power, reorder, security, vhost, 00:01:38.200 00:01:38.200 Message: 00:01:38.200 =============== 00:01:38.201 Drivers Enabled 00:01:38.201 =============== 00:01:38.201 00:01:38.201 common: 00:01:38.201 00:01:38.201 bus: 00:01:38.201 pci, vdev, 00:01:38.201 mempool: 00:01:38.201 ring, 00:01:38.201 dma: 00:01:38.201 00:01:38.201 net: 00:01:38.201 00:01:38.201 crypto: 00:01:38.201 00:01:38.201 compress: 00:01:38.201 00:01:38.201 vdpa: 00:01:38.201 00:01:38.201 00:01:38.201 Message: 00:01:38.201 ================= 00:01:38.201 Content Skipped 00:01:38.201 ================= 00:01:38.201 00:01:38.201 apps: 00:01:38.201 dumpcap: explicitly disabled via build config 00:01:38.201 graph: explicitly disabled via build config 00:01:38.201 pdump: explicitly disabled via build config 00:01:38.201 proc-info: explicitly disabled via build config 00:01:38.201 test-acl: explicitly disabled via build config 00:01:38.201 test-bbdev: explicitly disabled via build config 00:01:38.201 test-cmdline: explicitly disabled via build config 00:01:38.201 test-compress-perf: explicitly disabled via build config 00:01:38.201 test-crypto-perf: explicitly disabled via build config 00:01:38.201 test-dma-perf: explicitly disabled via build config 00:01:38.201 test-eventdev: explicitly disabled via build config 00:01:38.201 test-fib: explicitly disabled via build config 00:01:38.201 test-flow-perf: explicitly disabled via build config 00:01:38.201 test-gpudev: explicitly disabled via build config 00:01:38.201 test-mldev: explicitly disabled via build config 00:01:38.201 test-pipeline: explicitly disabled via build config 00:01:38.201 test-pmd: 
explicitly disabled via build config 00:01:38.201 test-regex: explicitly disabled via build config 00:01:38.201 test-sad: explicitly disabled via build config 00:01:38.201 test-security-perf: explicitly disabled via build config 00:01:38.201 00:01:38.201 libs: 00:01:38.201 argparse: explicitly disabled via build config 00:01:38.201 metrics: explicitly disabled via build config 00:01:38.201 acl: explicitly disabled via build config 00:01:38.201 bbdev: explicitly disabled via build config 00:01:38.201 bitratestats: explicitly disabled via build config 00:01:38.201 bpf: explicitly disabled via build config 00:01:38.201 cfgfile: explicitly disabled via build config 00:01:38.201 distributor: explicitly disabled via build config 00:01:38.201 efd: explicitly disabled via build config 00:01:38.201 eventdev: explicitly disabled via build config 00:01:38.201 dispatcher: explicitly disabled via build config 00:01:38.201 gpudev: explicitly disabled via build config 00:01:38.201 gro: explicitly disabled via build config 00:01:38.201 gso: explicitly disabled via build config 00:01:38.201 ip_frag: explicitly disabled via build config 00:01:38.201 jobstats: explicitly disabled via build config 00:01:38.201 latencystats: explicitly disabled via build config 00:01:38.201 lpm: explicitly disabled via build config 00:01:38.201 member: explicitly disabled via build config 00:01:38.201 pcapng: explicitly disabled via build config 00:01:38.201 rawdev: explicitly disabled via build config 00:01:38.201 regexdev: explicitly disabled via build config 00:01:38.201 mldev: explicitly disabled via build config 00:01:38.201 rib: explicitly disabled via build config 00:01:38.201 sched: explicitly disabled via build config 00:01:38.201 stack: explicitly disabled via build config 00:01:38.201 ipsec: explicitly disabled via build config 00:01:38.201 pdcp: explicitly disabled via build config 00:01:38.201 fib: explicitly disabled via build config 00:01:38.201 port: explicitly disabled via build config 00:01:38.201 pdump: explicitly disabled via build config 00:01:38.201 table: explicitly disabled via build config 00:01:38.201 pipeline: explicitly disabled via build config 00:01:38.201 graph: explicitly disabled via build config 00:01:38.201 node: explicitly disabled via build config 00:01:38.201 00:01:38.201 drivers: 00:01:38.201 common/cpt: not in enabled drivers build config 00:01:38.201 common/dpaax: not in enabled drivers build config 00:01:38.201 common/iavf: not in enabled drivers build config 00:01:38.201 common/idpf: not in enabled drivers build config 00:01:38.201 common/ionic: not in enabled drivers build config 00:01:38.201 common/mvep: not in enabled drivers build config 00:01:38.201 common/octeontx: not in enabled drivers build config 00:01:38.201 bus/auxiliary: not in enabled drivers build config 00:01:38.201 bus/cdx: not in enabled drivers build config 00:01:38.201 bus/dpaa: not in enabled drivers build config 00:01:38.201 bus/fslmc: not in enabled drivers build config 00:01:38.201 bus/ifpga: not in enabled drivers build config 00:01:38.201 bus/platform: not in enabled drivers build config 00:01:38.201 bus/uacce: not in enabled drivers build config 00:01:38.201 bus/vmbus: not in enabled drivers build config 00:01:38.201 common/cnxk: not in enabled drivers build config 00:01:38.201 common/mlx5: not in enabled drivers build config 00:01:38.201 common/nfp: not in enabled drivers build config 00:01:38.201 common/nitrox: not in enabled drivers build config 00:01:38.201 common/qat: not in enabled drivers build config 
00:01:38.201 common/sfc_efx: not in enabled drivers build config 00:01:38.201 mempool/bucket: not in enabled drivers build config 00:01:38.201 mempool/cnxk: not in enabled drivers build config 00:01:38.201 mempool/dpaa: not in enabled drivers build config 00:01:38.201 mempool/dpaa2: not in enabled drivers build config 00:01:38.201 mempool/octeontx: not in enabled drivers build config 00:01:38.201 mempool/stack: not in enabled drivers build config 00:01:38.201 dma/cnxk: not in enabled drivers build config 00:01:38.201 dma/dpaa: not in enabled drivers build config 00:01:38.201 dma/dpaa2: not in enabled drivers build config 00:01:38.201 dma/hisilicon: not in enabled drivers build config 00:01:38.201 dma/idxd: not in enabled drivers build config 00:01:38.201 dma/ioat: not in enabled drivers build config 00:01:38.201 dma/skeleton: not in enabled drivers build config 00:01:38.201 net/af_packet: not in enabled drivers build config 00:01:38.201 net/af_xdp: not in enabled drivers build config 00:01:38.201 net/ark: not in enabled drivers build config 00:01:38.201 net/atlantic: not in enabled drivers build config 00:01:38.201 net/avp: not in enabled drivers build config 00:01:38.201 net/axgbe: not in enabled drivers build config 00:01:38.201 net/bnx2x: not in enabled drivers build config 00:01:38.201 net/bnxt: not in enabled drivers build config 00:01:38.201 net/bonding: not in enabled drivers build config 00:01:38.201 net/cnxk: not in enabled drivers build config 00:01:38.201 net/cpfl: not in enabled drivers build config 00:01:38.201 net/cxgbe: not in enabled drivers build config 00:01:38.201 net/dpaa: not in enabled drivers build config 00:01:38.201 net/dpaa2: not in enabled drivers build config 00:01:38.201 net/e1000: not in enabled drivers build config 00:01:38.201 net/ena: not in enabled drivers build config 00:01:38.201 net/enetc: not in enabled drivers build config 00:01:38.201 net/enetfec: not in enabled drivers build config 00:01:38.201 net/enic: not in enabled drivers build config 00:01:38.201 net/failsafe: not in enabled drivers build config 00:01:38.201 net/fm10k: not in enabled drivers build config 00:01:38.201 net/gve: not in enabled drivers build config 00:01:38.201 net/hinic: not in enabled drivers build config 00:01:38.201 net/hns3: not in enabled drivers build config 00:01:38.201 net/i40e: not in enabled drivers build config 00:01:38.201 net/iavf: not in enabled drivers build config 00:01:38.201 net/ice: not in enabled drivers build config 00:01:38.201 net/idpf: not in enabled drivers build config 00:01:38.201 net/igc: not in enabled drivers build config 00:01:38.201 net/ionic: not in enabled drivers build config 00:01:38.201 net/ipn3ke: not in enabled drivers build config 00:01:38.201 net/ixgbe: not in enabled drivers build config 00:01:38.201 net/mana: not in enabled drivers build config 00:01:38.201 net/memif: not in enabled drivers build config 00:01:38.201 net/mlx4: not in enabled drivers build config 00:01:38.201 net/mlx5: not in enabled drivers build config 00:01:38.201 net/mvneta: not in enabled drivers build config 00:01:38.201 net/mvpp2: not in enabled drivers build config 00:01:38.201 net/netvsc: not in enabled drivers build config 00:01:38.201 net/nfb: not in enabled drivers build config 00:01:38.201 net/nfp: not in enabled drivers build config 00:01:38.201 net/ngbe: not in enabled drivers build config 00:01:38.201 net/null: not in enabled drivers build config 00:01:38.201 net/octeontx: not in enabled drivers build config 00:01:38.201 net/octeon_ep: not in enabled 
drivers build config 00:01:38.201 net/pcap: not in enabled drivers build config 00:01:38.201 net/pfe: not in enabled drivers build config 00:01:38.201 net/qede: not in enabled drivers build config 00:01:38.201 net/ring: not in enabled drivers build config 00:01:38.201 net/sfc: not in enabled drivers build config 00:01:38.201 net/softnic: not in enabled drivers build config 00:01:38.201 net/tap: not in enabled drivers build config 00:01:38.201 net/thunderx: not in enabled drivers build config 00:01:38.201 net/txgbe: not in enabled drivers build config 00:01:38.201 net/vdev_netvsc: not in enabled drivers build config 00:01:38.201 net/vhost: not in enabled drivers build config 00:01:38.201 net/virtio: not in enabled drivers build config 00:01:38.201 net/vmxnet3: not in enabled drivers build config 00:01:38.201 raw/*: missing internal dependency, "rawdev" 00:01:38.201 crypto/armv8: not in enabled drivers build config 00:01:38.201 crypto/bcmfs: not in enabled drivers build config 00:01:38.201 crypto/caam_jr: not in enabled drivers build config 00:01:38.201 crypto/ccp: not in enabled drivers build config 00:01:38.201 crypto/cnxk: not in enabled drivers build config 00:01:38.201 crypto/dpaa_sec: not in enabled drivers build config 00:01:38.201 crypto/dpaa2_sec: not in enabled drivers build config 00:01:38.201 crypto/ipsec_mb: not in enabled drivers build config 00:01:38.201 crypto/mlx5: not in enabled drivers build config 00:01:38.201 crypto/mvsam: not in enabled drivers build config 00:01:38.201 crypto/nitrox: not in enabled drivers build config 00:01:38.201 crypto/null: not in enabled drivers build config 00:01:38.202 crypto/octeontx: not in enabled drivers build config 00:01:38.202 crypto/openssl: not in enabled drivers build config 00:01:38.202 crypto/scheduler: not in enabled drivers build config 00:01:38.202 crypto/uadk: not in enabled drivers build config 00:01:38.202 crypto/virtio: not in enabled drivers build config 00:01:38.202 compress/isal: not in enabled drivers build config 00:01:38.202 compress/mlx5: not in enabled drivers build config 00:01:38.202 compress/nitrox: not in enabled drivers build config 00:01:38.202 compress/octeontx: not in enabled drivers build config 00:01:38.202 compress/zlib: not in enabled drivers build config 00:01:38.202 regex/*: missing internal dependency, "regexdev" 00:01:38.202 ml/*: missing internal dependency, "mldev" 00:01:38.202 vdpa/ifc: not in enabled drivers build config 00:01:38.202 vdpa/mlx5: not in enabled drivers build config 00:01:38.202 vdpa/nfp: not in enabled drivers build config 00:01:38.202 vdpa/sfc: not in enabled drivers build config 00:01:38.202 event/*: missing internal dependency, "eventdev" 00:01:38.202 baseband/*: missing internal dependency, "bbdev" 00:01:38.202 gpu/*: missing internal dependency, "gpudev" 00:01:38.202 00:01:38.202 00:01:38.461 Build targets in project: 85 00:01:38.461 00:01:38.461 DPDK 24.03.0 00:01:38.461 00:01:38.461 User defined options 00:01:38.461 buildtype : debug 00:01:38.461 default_library : static 00:01:38.461 libdir : lib 00:01:38.461 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:38.461 c_args : -fPIC -Werror 00:01:38.461 c_link_args : 00:01:38.461 cpu_instruction_set: native 00:01:38.461 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:38.461 disable_libs : 
port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:38.461 enable_docs : false 00:01:38.461 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:38.461 enable_kmods : false 00:01:38.461 max_lcores : 128 00:01:38.461 tests : false 00:01:38.461 00:01:38.461 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:38.720 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:38.985 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:38.985 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:38.985 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:38.985 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:38.985 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:38.985 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:38.985 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:38.985 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:38.985 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:38.985 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:38.985 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:38.985 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:38.985 [13/268] Linking static target lib/librte_kvargs.a 00:01:38.985 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:38.985 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:38.985 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:38.985 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:38.985 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:38.985 [19/268] Linking static target lib/librte_log.a 00:01:38.985 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:38.985 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:38.985 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:38.985 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:38.985 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:38.985 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:38.985 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:38.985 [27/268] Linking static target lib/librte_pci.a 00:01:38.985 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:38.985 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:38.985 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:38.985 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:38.985 [32/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:39.243 [33/268] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:01:39.243 [34/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:39.243 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:39.243 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:39.503 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:39.503 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:39.503 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:39.503 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:39.503 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:39.503 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:39.503 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:39.503 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:39.503 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:39.503 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:39.503 [47/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:39.503 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:39.503 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:39.503 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:39.503 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:39.503 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:39.503 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:39.503 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:39.503 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:39.503 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:39.503 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:39.503 [58/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.503 [59/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:39.503 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:39.503 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:39.503 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:39.503 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:39.503 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:39.503 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:39.503 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:39.503 [67/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:39.503 [68/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:39.503 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:39.503 [70/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:39.503 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:39.503 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:39.503 [73/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:39.503 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:39.503 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:39.503 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:39.503 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:39.503 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:39.503 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:39.503 [80/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:39.503 [81/268] Linking static target lib/librte_telemetry.a 00:01:39.503 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:39.503 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:39.503 [84/268] Linking static target lib/librte_meter.a 00:01:39.503 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:39.503 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:39.503 [87/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:39.503 [88/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:39.503 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:39.503 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:39.503 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:39.503 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:39.503 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:39.503 [94/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.503 [95/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:39.503 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:39.503 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:39.503 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:39.503 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:39.503 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:39.503 [101/268] Linking static target lib/librte_ring.a 00:01:39.504 [102/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:39.504 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:39.504 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:39.504 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:39.504 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:39.504 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:39.504 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:39.504 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:39.504 [110/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:39.504 [111/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:39.504 [112/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:39.504 [113/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:39.504 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:39.504 [115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:39.504 [116/268] Linking static target lib/librte_timer.a 00:01:39.504 [117/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:39.504 [118/268] Linking static target lib/librte_cmdline.a 00:01:39.504 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:39.504 [120/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:39.504 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:39.504 [122/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:39.504 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:39.504 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:39.504 [125/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:39.504 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:39.504 [127/268] Linking static target lib/librte_eal.a 00:01:39.504 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:39.504 [129/268] Linking static target lib/librte_net.a 00:01:39.504 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:39.504 [131/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:39.504 [132/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:39.504 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:39.504 [134/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:39.504 [135/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:39.504 [136/268] Linking static target lib/librte_mempool.a 00:01:39.504 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:39.504 [138/268] Linking static target lib/librte_rcu.a 00:01:39.504 [139/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:39.763 [140/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:39.763 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:39.763 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:39.763 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:39.763 [144/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:39.763 [145/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:39.763 [146/268] Linking static target lib/librte_compressdev.a 00:01:39.763 [147/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.763 [148/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:39.763 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:39.763 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:39.763 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:39.763 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:39.763 [153/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:39.763 [154/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:39.763 [155/268] Linking static target lib/librte_hash.a 00:01:39.763 [156/268] Linking static target lib/librte_dmadev.a 00:01:39.763 [157/268] Linking static target lib/librte_mbuf.a 00:01:39.763 [158/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:39.763 [159/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.763 [160/268] Linking target lib/librte_log.so.24.1 00:01:39.763 [161/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:39.763 [162/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:39.763 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:39.763 [164/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:39.763 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:39.763 [166/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:39.763 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:39.763 [168/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.763 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:39.763 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:40.023 [171/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:40.023 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:40.023 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:40.023 [174/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.023 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:40.023 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:40.023 [177/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:40.023 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:40.023 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:40.023 [180/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:40.023 [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.023 [182/268] Linking static target lib/librte_reorder.a 00:01:40.023 [183/268] Linking target lib/librte_kvargs.so.24.1 00:01:40.023 [184/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.023 [185/268] Linking static target lib/librte_power.a 00:01:40.023 [186/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:40.023 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:40.023 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:40.023 [189/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:40.023 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:40.023 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:40.023 [192/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:40.023 [193/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.023 [194/268] Linking static target lib/librte_cryptodev.a 00:01:40.023 [195/268] Linking 
static target lib/librte_security.a 00:01:40.023 [196/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.023 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.023 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:40.023 [199/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.023 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.023 [201/268] Linking static target drivers/librte_bus_vdev.a 00:01:40.023 [202/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:40.023 [203/268] Linking target lib/librte_telemetry.so.24.1 00:01:40.023 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:40.023 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:40.281 [206/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:40.281 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.281 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.281 [209/268] Linking static target lib/librte_ethdev.a 00:01:40.281 [210/268] Linking static target drivers/librte_mempool_ring.a 00:01:40.281 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:40.281 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:40.281 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.281 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.281 [215/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:40.281 [216/268] Linking static target drivers/librte_bus_pci.a 00:01:40.540 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.540 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.540 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.540 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.540 [221/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.540 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.799 [223/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.799 [224/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.799 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.799 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:40.799 [227/268] Linking static target lib/librte_vhost.a 00:01:41.059 [228/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.059 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.439 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.008 [231/268] Generating 
lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.576 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.868 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.868 [234/268] Linking target lib/librte_eal.so.24.1 00:01:52.868 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:52.868 [236/268] Linking target lib/librte_pci.so.24.1 00:01:52.868 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:52.868 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:52.868 [239/268] Linking target lib/librte_meter.so.24.1 00:01:52.868 [240/268] Linking target lib/librte_ring.so.24.1 00:01:52.868 [241/268] Linking target lib/librte_timer.so.24.1 00:01:52.868 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:52.868 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:52.868 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:52.868 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:52.868 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:52.868 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:52.868 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:52.868 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:53.128 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:53.128 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:53.128 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:53.128 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:53.128 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:53.387 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:53.387 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:53.387 [257/268] Linking target lib/librte_net.so.24.1 00:01:53.387 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:53.387 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:53.387 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:53.387 [261/268] Linking target lib/librte_cmdline.so.24.1 00:01:53.646 [262/268] Linking target lib/librte_security.so.24.1 00:01:53.646 [263/268] Linking target lib/librte_hash.so.24.1 00:01:53.646 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:53.646 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:53.646 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:53.646 [267/268] Linking target lib/librte_power.so.24.1 00:01:53.646 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:53.646 INFO: autodetecting backend as ninja 00:01:53.646 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:54.582 CC lib/log/log.o 00:01:54.582 CC lib/log/log_flags.o 00:01:54.582 CC lib/log/log_deprecated.o 00:01:54.840 CC lib/ut_mock/mock.o 00:01:54.841 CC lib/ut/ut.o 00:01:54.841 LIB libspdk_log.a 00:01:54.841 LIB libspdk_ut_mock.a 00:01:54.841 LIB libspdk_ut.a 00:01:55.099 CC lib/dma/dma.o 00:01:55.099 CXX 
lib/trace_parser/trace.o 00:01:55.099 CC lib/util/bit_array.o 00:01:55.099 CC lib/util/base64.o 00:01:55.099 CC lib/util/cpuset.o 00:01:55.099 CC lib/util/crc16.o 00:01:55.099 CC lib/util/crc32c.o 00:01:55.099 CC lib/util/crc32.o 00:01:55.099 CC lib/util/crc32_ieee.o 00:01:55.099 CC lib/util/crc64.o 00:01:55.099 CC lib/util/fd.o 00:01:55.099 CC lib/util/dif.o 00:01:55.099 CC lib/util/fd_group.o 00:01:55.099 CC lib/util/file.o 00:01:55.099 CC lib/util/hexlify.o 00:01:55.099 CC lib/util/iov.o 00:01:55.099 CC lib/ioat/ioat.o 00:01:55.099 CC lib/util/math.o 00:01:55.099 CC lib/util/net.o 00:01:55.099 CC lib/util/pipe.o 00:01:55.099 CC lib/util/strerror_tls.o 00:01:55.099 CC lib/util/uuid.o 00:01:55.099 CC lib/util/string.o 00:01:55.099 CC lib/util/xor.o 00:01:55.099 CC lib/util/zipf.o 00:01:55.099 CC lib/util/md5.o 00:01:55.358 LIB libspdk_dma.a 00:01:55.358 CC lib/vfio_user/host/vfio_user_pci.o 00:01:55.358 CC lib/vfio_user/host/vfio_user.o 00:01:55.358 LIB libspdk_ioat.a 00:01:55.358 LIB libspdk_vfio_user.a 00:01:55.358 LIB libspdk_util.a 00:01:55.617 LIB libspdk_trace_parser.a 00:01:55.876 CC lib/conf/conf.o 00:01:55.876 CC lib/vmd/vmd.o 00:01:55.876 CC lib/vmd/led.o 00:01:55.876 CC lib/idxd/idxd.o 00:01:55.876 CC lib/idxd/idxd_user.o 00:01:55.876 CC lib/json/json_write.o 00:01:55.876 CC lib/json/json_parse.o 00:01:55.876 CC lib/idxd/idxd_kernel.o 00:01:55.876 CC lib/json/json_util.o 00:01:55.876 CC lib/env_dpdk/pci.o 00:01:55.876 CC lib/env_dpdk/memory.o 00:01:55.876 CC lib/env_dpdk/env.o 00:01:55.876 CC lib/env_dpdk/threads.o 00:01:55.876 CC lib/env_dpdk/init.o 00:01:55.876 CC lib/env_dpdk/pci_vmd.o 00:01:55.876 CC lib/env_dpdk/pci_ioat.o 00:01:55.876 CC lib/env_dpdk/pci_virtio.o 00:01:55.876 CC lib/rdma_utils/rdma_utils.o 00:01:55.876 CC lib/env_dpdk/pci_idxd.o 00:01:55.876 CC lib/env_dpdk/pci_event.o 00:01:55.876 CC lib/env_dpdk/sigbus_handler.o 00:01:55.876 CC lib/env_dpdk/pci_dpdk.o 00:01:55.876 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:55.876 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:55.876 LIB libspdk_conf.a 00:01:55.876 LIB libspdk_rdma_utils.a 00:01:55.876 LIB libspdk_json.a 00:01:56.135 LIB libspdk_idxd.a 00:01:56.135 LIB libspdk_vmd.a 00:01:56.394 CC lib/jsonrpc/jsonrpc_server.o 00:01:56.394 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:56.394 CC lib/jsonrpc/jsonrpc_client.o 00:01:56.394 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:56.394 CC lib/rdma_provider/common.o 00:01:56.394 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:56.394 LIB libspdk_rdma_provider.a 00:01:56.394 LIB libspdk_jsonrpc.a 00:01:56.653 LIB libspdk_env_dpdk.a 00:01:56.653 CC lib/rpc/rpc.o 00:01:56.912 LIB libspdk_rpc.a 00:01:57.172 CC lib/trace/trace_rpc.o 00:01:57.172 CC lib/trace/trace.o 00:01:57.172 CC lib/trace/trace_flags.o 00:01:57.172 CC lib/notify/notify.o 00:01:57.172 CC lib/notify/notify_rpc.o 00:01:57.172 CC lib/keyring/keyring.o 00:01:57.172 CC lib/keyring/keyring_rpc.o 00:01:57.431 LIB libspdk_notify.a 00:01:57.431 LIB libspdk_trace.a 00:01:57.431 LIB libspdk_keyring.a 00:01:57.691 CC lib/thread/thread.o 00:01:57.691 CC lib/thread/iobuf.o 00:01:57.691 CC lib/sock/sock.o 00:01:57.691 CC lib/sock/sock_rpc.o 00:01:57.950 LIB libspdk_sock.a 00:01:58.208 CC lib/nvme/nvme_ctrlr.o 00:01:58.208 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:58.208 CC lib/nvme/nvme_ns_cmd.o 00:01:58.208 CC lib/nvme/nvme_ns.o 00:01:58.208 CC lib/nvme/nvme_fabric.o 00:01:58.208 CC lib/nvme/nvme_pcie_common.o 00:01:58.208 CC lib/nvme/nvme_pcie.o 00:01:58.208 CC lib/nvme/nvme_qpair.o 00:01:58.208 CC lib/nvme/nvme.o 00:01:58.208 CC 
lib/nvme/nvme_quirks.o 00:01:58.208 CC lib/nvme/nvme_transport.o 00:01:58.208 CC lib/nvme/nvme_discovery.o 00:01:58.208 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:58.208 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:58.208 CC lib/nvme/nvme_tcp.o 00:01:58.208 CC lib/nvme/nvme_opal.o 00:01:58.208 CC lib/nvme/nvme_io_msg.o 00:01:58.208 CC lib/nvme/nvme_poll_group.o 00:01:58.208 CC lib/nvme/nvme_zns.o 00:01:58.208 CC lib/nvme/nvme_stubs.o 00:01:58.208 CC lib/nvme/nvme_auth.o 00:01:58.208 CC lib/nvme/nvme_cuse.o 00:01:58.208 CC lib/nvme/nvme_vfio_user.o 00:01:58.208 CC lib/nvme/nvme_rdma.o 00:01:58.467 LIB libspdk_thread.a 00:01:58.726 CC lib/vfu_tgt/tgt_rpc.o 00:01:58.726 CC lib/vfu_tgt/tgt_endpoint.o 00:01:58.726 CC lib/init/json_config.o 00:01:58.726 CC lib/init/subsystem.o 00:01:58.726 CC lib/init/subsystem_rpc.o 00:01:58.726 CC lib/init/rpc.o 00:01:58.726 CC lib/fsdev/fsdev.o 00:01:58.726 CC lib/fsdev/fsdev_io.o 00:01:58.726 CC lib/virtio/virtio.o 00:01:58.726 CC lib/fsdev/fsdev_rpc.o 00:01:58.726 CC lib/virtio/virtio_vhost_user.o 00:01:58.726 CC lib/virtio/virtio_vfio_user.o 00:01:58.726 CC lib/virtio/virtio_pci.o 00:01:58.726 CC lib/blob/blobstore.o 00:01:58.726 CC lib/blob/request.o 00:01:58.726 CC lib/accel/accel.o 00:01:58.726 CC lib/blob/zeroes.o 00:01:58.726 CC lib/accel/accel_rpc.o 00:01:58.726 CC lib/blob/blob_bs_dev.o 00:01:58.726 CC lib/accel/accel_sw.o 00:01:58.985 LIB libspdk_init.a 00:01:58.985 LIB libspdk_vfu_tgt.a 00:01:58.985 LIB libspdk_virtio.a 00:01:58.985 LIB libspdk_fsdev.a 00:01:59.244 CC lib/event/log_rpc.o 00:01:59.244 CC lib/event/app.o 00:01:59.244 CC lib/event/app_rpc.o 00:01:59.244 CC lib/event/reactor.o 00:01:59.244 CC lib/event/scheduler_static.o 00:01:59.503 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:01:59.503 LIB libspdk_event.a 00:01:59.503 LIB libspdk_accel.a 00:01:59.503 LIB libspdk_nvme.a 00:01:59.762 CC lib/bdev/bdev.o 00:01:59.762 CC lib/bdev/part.o 00:01:59.762 CC lib/bdev/bdev_rpc.o 00:01:59.762 CC lib/bdev/bdev_zone.o 00:01:59.762 CC lib/bdev/scsi_nvme.o 00:01:59.762 LIB libspdk_fuse_dispatcher.a 00:02:00.330 LIB libspdk_blob.a 00:02:00.897 CC lib/lvol/lvol.o 00:02:00.897 CC lib/blobfs/blobfs.o 00:02:00.897 CC lib/blobfs/tree.o 00:02:01.156 LIB libspdk_lvol.a 00:02:01.156 LIB libspdk_blobfs.a 00:02:01.416 LIB libspdk_bdev.a 00:02:01.675 CC lib/scsi/port.o 00:02:01.676 CC lib/scsi/dev.o 00:02:01.676 CC lib/scsi/lun.o 00:02:01.676 CC lib/scsi/scsi_bdev.o 00:02:01.676 CC lib/scsi/scsi.o 00:02:01.676 CC lib/scsi/task.o 00:02:01.676 CC lib/scsi/scsi_pr.o 00:02:01.676 CC lib/scsi/scsi_rpc.o 00:02:01.676 CC lib/ublk/ublk.o 00:02:01.676 CC lib/ftl/ftl_core.o 00:02:01.676 CC lib/ublk/ublk_rpc.o 00:02:01.676 CC lib/ftl/ftl_init.o 00:02:01.676 CC lib/ftl/ftl_layout.o 00:02:01.676 CC lib/nvmf/ctrlr.o 00:02:01.676 CC lib/ftl/ftl_debug.o 00:02:01.676 CC lib/nvmf/ctrlr_discovery.o 00:02:01.676 CC lib/ftl/ftl_io.o 00:02:01.676 CC lib/nvmf/ctrlr_bdev.o 00:02:01.935 CC lib/ftl/ftl_sb.o 00:02:01.935 CC lib/nvmf/subsystem.o 00:02:01.935 CC lib/ftl/ftl_l2p.o 00:02:01.935 CC lib/nvmf/nvmf.o 00:02:01.935 CC lib/ftl/ftl_l2p_flat.o 00:02:01.935 CC lib/nvmf/nvmf_rpc.o 00:02:01.935 CC lib/ftl/ftl_nv_cache.o 00:02:01.935 CC lib/nbd/nbd.o 00:02:01.935 CC lib/nvmf/transport.o 00:02:01.935 CC lib/ftl/ftl_band.o 00:02:01.935 CC lib/nvmf/tcp.o 00:02:01.935 CC lib/nbd/nbd_rpc.o 00:02:01.935 CC lib/ftl/ftl_writer.o 00:02:01.935 CC lib/ftl/ftl_band_ops.o 00:02:01.935 CC lib/nvmf/stubs.o 00:02:01.935 CC lib/nvmf/mdns_server.o 00:02:01.935 CC lib/ftl/ftl_rq.o 00:02:01.935 CC 
lib/nvmf/vfio_user.o 00:02:01.935 CC lib/ftl/ftl_reloc.o 00:02:01.935 CC lib/nvmf/rdma.o 00:02:01.935 CC lib/ftl/ftl_l2p_cache.o 00:02:01.935 CC lib/nvmf/auth.o 00:02:01.935 CC lib/ftl/ftl_p2l.o 00:02:01.935 CC lib/ftl/ftl_p2l_log.o 00:02:01.935 CC lib/ftl/mngt/ftl_mngt.o 00:02:01.935 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:01.935 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:01.935 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:01.935 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:01.935 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:01.935 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:01.935 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:01.935 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:01.935 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:01.935 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:01.935 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:01.935 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:01.935 CC lib/ftl/utils/ftl_conf.o 00:02:01.935 CC lib/ftl/utils/ftl_mempool.o 00:02:01.935 CC lib/ftl/utils/ftl_md.o 00:02:01.935 CC lib/ftl/utils/ftl_bitmap.o 00:02:01.935 CC lib/ftl/utils/ftl_property.o 00:02:01.935 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:01.935 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:01.935 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:01.935 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:01.935 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:01.935 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:01.935 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:01.935 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:01.935 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:01.935 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:01.935 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:01.935 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:01.935 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:01.935 CC lib/ftl/base/ftl_base_dev.o 00:02:01.935 CC lib/ftl/base/ftl_base_bdev.o 00:02:01.935 CC lib/ftl/ftl_trace.o 00:02:02.194 LIB libspdk_nbd.a 00:02:02.194 LIB libspdk_scsi.a 00:02:02.194 LIB libspdk_ublk.a 00:02:02.451 LIB libspdk_ftl.a 00:02:02.451 CC lib/iscsi/conn.o 00:02:02.451 CC lib/iscsi/param.o 00:02:02.451 CC lib/iscsi/init_grp.o 00:02:02.451 CC lib/iscsi/iscsi.o 00:02:02.451 CC lib/iscsi/portal_grp.o 00:02:02.451 CC lib/iscsi/iscsi_subsystem.o 00:02:02.451 CC lib/iscsi/tgt_node.o 00:02:02.451 CC lib/vhost/vhost.o 00:02:02.451 CC lib/vhost/rte_vhost_user.o 00:02:02.451 CC lib/vhost/vhost_rpc.o 00:02:02.451 CC lib/iscsi/iscsi_rpc.o 00:02:02.451 CC lib/vhost/vhost_scsi.o 00:02:02.451 CC lib/iscsi/task.o 00:02:02.451 CC lib/vhost/vhost_blk.o 00:02:03.018 LIB libspdk_vhost.a 00:02:03.018 LIB libspdk_nvmf.a 00:02:03.277 LIB libspdk_iscsi.a 00:02:03.535 CC module/vfu_device/vfu_virtio.o 00:02:03.535 CC module/vfu_device/vfu_virtio_blk.o 00:02:03.535 CC module/vfu_device/vfu_virtio_fs.o 00:02:03.535 CC module/vfu_device/vfu_virtio_scsi.o 00:02:03.535 CC module/vfu_device/vfu_virtio_rpc.o 00:02:03.535 CC module/env_dpdk/env_dpdk_rpc.o 00:02:03.793 CC module/accel/error/accel_error.o 00:02:03.793 CC module/accel/error/accel_error_rpc.o 00:02:03.793 CC module/scheduler/gscheduler/gscheduler.o 00:02:03.793 CC module/keyring/linux/keyring.o 00:02:03.793 CC module/blob/bdev/blob_bdev.o 00:02:03.793 CC module/keyring/linux/keyring_rpc.o 00:02:03.793 CC module/accel/iaa/accel_iaa_rpc.o 00:02:03.793 CC module/accel/iaa/accel_iaa.o 00:02:03.793 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:03.793 CC module/fsdev/aio/fsdev_aio.o 00:02:03.793 CC module/fsdev/aio/linux_aio_mgr.o 00:02:03.793 CC module/accel/ioat/accel_ioat.o 00:02:03.793 CC module/keyring/file/keyring_rpc.o 00:02:03.793 CC module/keyring/file/keyring.o 00:02:03.793 CC 
module/sock/posix/posix.o 00:02:03.793 CC module/accel/ioat/accel_ioat_rpc.o 00:02:03.793 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:03.793 LIB libspdk_env_dpdk_rpc.a 00:02:03.793 CC module/accel/dsa/accel_dsa.o 00:02:03.793 CC module/accel/dsa/accel_dsa_rpc.o 00:02:03.793 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:03.793 LIB libspdk_keyring_linux.a 00:02:03.793 LIB libspdk_scheduler_gscheduler.a 00:02:03.793 LIB libspdk_keyring_file.a 00:02:03.793 LIB libspdk_accel_error.a 00:02:03.793 LIB libspdk_accel_iaa.a 00:02:03.793 LIB libspdk_scheduler_dpdk_governor.a 00:02:03.793 LIB libspdk_accel_ioat.a 00:02:03.793 LIB libspdk_scheduler_dynamic.a 00:02:04.051 LIB libspdk_blob_bdev.a 00:02:04.051 LIB libspdk_accel_dsa.a 00:02:04.051 LIB libspdk_vfu_device.a 00:02:04.308 LIB libspdk_fsdev_aio.a 00:02:04.308 LIB libspdk_sock_posix.a 00:02:04.308 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:04.308 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:04.308 CC module/bdev/nvme/bdev_nvme.o 00:02:04.308 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:04.308 CC module/bdev/nvme/bdev_mdns_client.o 00:02:04.308 CC module/bdev/nvme/nvme_rpc.o 00:02:04.308 CC module/bdev/delay/vbdev_delay.o 00:02:04.308 CC module/bdev/nvme/vbdev_opal.o 00:02:04.308 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:04.308 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:04.308 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:04.308 CC module/bdev/passthru/vbdev_passthru.o 00:02:04.308 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:04.308 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:04.308 CC module/bdev/lvol/vbdev_lvol.o 00:02:04.308 CC module/bdev/aio/bdev_aio.o 00:02:04.308 CC module/bdev/aio/bdev_aio_rpc.o 00:02:04.308 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:04.308 CC module/bdev/malloc/bdev_malloc.o 00:02:04.308 CC module/bdev/error/vbdev_error_rpc.o 00:02:04.308 CC module/bdev/error/vbdev_error.o 00:02:04.308 CC module/bdev/gpt/vbdev_gpt.o 00:02:04.308 CC module/bdev/gpt/gpt.o 00:02:04.308 CC module/bdev/split/vbdev_split_rpc.o 00:02:04.308 CC module/bdev/split/vbdev_split.o 00:02:04.308 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:04.308 CC module/blobfs/bdev/blobfs_bdev.o 00:02:04.308 CC module/bdev/raid/bdev_raid.o 00:02:04.308 CC module/bdev/raid/bdev_raid_sb.o 00:02:04.308 CC module/bdev/raid/bdev_raid_rpc.o 00:02:04.308 CC module/bdev/raid/raid0.o 00:02:04.308 CC module/bdev/null/bdev_null.o 00:02:04.308 CC module/bdev/raid/raid1.o 00:02:04.308 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:04.308 CC module/bdev/null/bdev_null_rpc.o 00:02:04.308 CC module/bdev/raid/concat.o 00:02:04.308 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:04.309 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:04.309 CC module/bdev/iscsi/bdev_iscsi.o 00:02:04.309 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:04.309 CC module/bdev/ftl/bdev_ftl.o 00:02:04.309 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:04.567 LIB libspdk_blobfs_bdev.a 00:02:04.567 LIB libspdk_bdev_split.a 00:02:04.567 LIB libspdk_bdev_gpt.a 00:02:04.567 LIB libspdk_bdev_error.a 00:02:04.567 LIB libspdk_bdev_passthru.a 00:02:04.567 LIB libspdk_bdev_null.a 00:02:04.567 LIB libspdk_bdev_zone_block.a 00:02:04.567 LIB libspdk_bdev_delay.a 00:02:04.567 LIB libspdk_bdev_ftl.a 00:02:04.567 LIB libspdk_bdev_aio.a 00:02:04.567 LIB libspdk_bdev_iscsi.a 00:02:04.567 LIB libspdk_bdev_malloc.a 00:02:04.826 LIB libspdk_bdev_lvol.a 00:02:04.826 LIB libspdk_bdev_virtio.a 00:02:05.085 LIB libspdk_bdev_raid.a 00:02:06.022 LIB libspdk_bdev_nvme.a 00:02:06.592 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:02:06.592 CC module/event/subsystems/vmd/vmd.o 00:02:06.592 CC module/event/subsystems/iobuf/iobuf.o 00:02:06.592 CC module/event/subsystems/keyring/keyring.o 00:02:06.592 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:06.592 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:06.592 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:06.592 CC module/event/subsystems/scheduler/scheduler.o 00:02:06.592 CC module/event/subsystems/fsdev/fsdev.o 00:02:06.592 CC module/event/subsystems/sock/sock.o 00:02:06.592 LIB libspdk_event_keyring.a 00:02:06.592 LIB libspdk_event_vmd.a 00:02:06.592 LIB libspdk_event_vhost_blk.a 00:02:06.592 LIB libspdk_event_vfu_tgt.a 00:02:06.592 LIB libspdk_event_iobuf.a 00:02:06.592 LIB libspdk_event_fsdev.a 00:02:06.592 LIB libspdk_event_scheduler.a 00:02:06.592 LIB libspdk_event_sock.a 00:02:06.851 CC module/event/subsystems/accel/accel.o 00:02:07.111 LIB libspdk_event_accel.a 00:02:07.372 CC module/event/subsystems/bdev/bdev.o 00:02:07.372 LIB libspdk_event_bdev.a 00:02:07.632 CC module/event/subsystems/ublk/ublk.o 00:02:07.632 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:07.632 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:07.632 CC module/event/subsystems/scsi/scsi.o 00:02:07.632 CC module/event/subsystems/nbd/nbd.o 00:02:07.892 LIB libspdk_event_ublk.a 00:02:07.892 LIB libspdk_event_nbd.a 00:02:07.892 LIB libspdk_event_scsi.a 00:02:07.892 LIB libspdk_event_nvmf.a 00:02:08.151 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:08.151 CC module/event/subsystems/iscsi/iscsi.o 00:02:08.411 LIB libspdk_event_vhost_scsi.a 00:02:08.411 LIB libspdk_event_iscsi.a 00:02:08.673 CC app/spdk_nvme_perf/perf.o 00:02:08.673 TEST_HEADER include/spdk/accel.h 00:02:08.673 TEST_HEADER include/spdk/accel_module.h 00:02:08.673 TEST_HEADER include/spdk/barrier.h 00:02:08.673 CC app/spdk_nvme_identify/identify.o 00:02:08.673 TEST_HEADER include/spdk/assert.h 00:02:08.673 TEST_HEADER include/spdk/bdev_module.h 00:02:08.673 TEST_HEADER include/spdk/bdev.h 00:02:08.673 TEST_HEADER include/spdk/base64.h 00:02:08.673 TEST_HEADER include/spdk/bdev_zone.h 00:02:08.673 TEST_HEADER include/spdk/bit_array.h 00:02:08.673 TEST_HEADER include/spdk/blob_bdev.h 00:02:08.673 TEST_HEADER include/spdk/bit_pool.h 00:02:08.673 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:08.673 TEST_HEADER include/spdk/blobfs.h 00:02:08.673 TEST_HEADER include/spdk/blob.h 00:02:08.673 CC app/spdk_lspci/spdk_lspci.o 00:02:08.673 TEST_HEADER include/spdk/config.h 00:02:08.673 TEST_HEADER include/spdk/conf.h 00:02:08.673 TEST_HEADER include/spdk/cpuset.h 00:02:08.673 TEST_HEADER include/spdk/crc16.h 00:02:08.673 TEST_HEADER include/spdk/crc64.h 00:02:08.673 TEST_HEADER include/spdk/crc32.h 00:02:08.673 CC test/rpc_client/rpc_client_test.o 00:02:08.673 TEST_HEADER include/spdk/dif.h 00:02:08.673 CC app/trace_record/trace_record.o 00:02:08.673 TEST_HEADER include/spdk/endian.h 00:02:08.673 TEST_HEADER include/spdk/dma.h 00:02:08.673 TEST_HEADER include/spdk/event.h 00:02:08.673 TEST_HEADER include/spdk/env_dpdk.h 00:02:08.673 TEST_HEADER include/spdk/env.h 00:02:08.673 TEST_HEADER include/spdk/fd.h 00:02:08.673 TEST_HEADER include/spdk/fd_group.h 00:02:08.673 CXX app/trace/trace.o 00:02:08.673 CC app/spdk_nvme_discover/discovery_aer.o 00:02:08.673 CC app/spdk_top/spdk_top.o 00:02:08.673 TEST_HEADER include/spdk/file.h 00:02:08.673 TEST_HEADER include/spdk/fsdev_module.h 00:02:08.673 TEST_HEADER include/spdk/fsdev.h 00:02:08.673 TEST_HEADER 
include/spdk/fuse_dispatcher.h 00:02:08.673 TEST_HEADER include/spdk/ftl.h 00:02:08.673 TEST_HEADER include/spdk/gpt_spec.h 00:02:08.673 TEST_HEADER include/spdk/hexlify.h 00:02:08.673 TEST_HEADER include/spdk/histogram_data.h 00:02:08.673 TEST_HEADER include/spdk/idxd.h 00:02:08.673 TEST_HEADER include/spdk/idxd_spec.h 00:02:08.673 TEST_HEADER include/spdk/ioat_spec.h 00:02:08.673 TEST_HEADER include/spdk/init.h 00:02:08.673 TEST_HEADER include/spdk/json.h 00:02:08.673 TEST_HEADER include/spdk/iscsi_spec.h 00:02:08.673 TEST_HEADER include/spdk/ioat.h 00:02:08.673 TEST_HEADER include/spdk/jsonrpc.h 00:02:08.673 TEST_HEADER include/spdk/keyring_module.h 00:02:08.673 TEST_HEADER include/spdk/keyring.h 00:02:08.673 TEST_HEADER include/spdk/likely.h 00:02:08.673 TEST_HEADER include/spdk/log.h 00:02:08.673 TEST_HEADER include/spdk/lvol.h 00:02:08.674 TEST_HEADER include/spdk/md5.h 00:02:08.674 TEST_HEADER include/spdk/mmio.h 00:02:08.674 TEST_HEADER include/spdk/memory.h 00:02:08.674 TEST_HEADER include/spdk/nvme.h 00:02:08.674 TEST_HEADER include/spdk/nbd.h 00:02:08.674 TEST_HEADER include/spdk/net.h 00:02:08.674 TEST_HEADER include/spdk/notify.h 00:02:08.674 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:08.674 TEST_HEADER include/spdk/nvme_spec.h 00:02:08.674 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:08.674 TEST_HEADER include/spdk/nvme_intel.h 00:02:08.674 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:08.674 TEST_HEADER include/spdk/nvme_zns.h 00:02:08.674 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:08.674 TEST_HEADER include/spdk/nvmf.h 00:02:08.674 TEST_HEADER include/spdk/opal.h 00:02:08.674 TEST_HEADER include/spdk/pci_ids.h 00:02:08.674 TEST_HEADER include/spdk/nvmf_spec.h 00:02:08.674 TEST_HEADER include/spdk/opal_spec.h 00:02:08.674 TEST_HEADER include/spdk/nvmf_transport.h 00:02:08.674 TEST_HEADER include/spdk/pipe.h 00:02:08.674 TEST_HEADER include/spdk/queue.h 00:02:08.674 TEST_HEADER include/spdk/rpc.h 00:02:08.674 TEST_HEADER include/spdk/scheduler.h 00:02:08.674 TEST_HEADER include/spdk/reduce.h 00:02:08.674 TEST_HEADER include/spdk/scsi.h 00:02:08.674 CC app/iscsi_tgt/iscsi_tgt.o 00:02:08.674 TEST_HEADER include/spdk/scsi_spec.h 00:02:08.674 TEST_HEADER include/spdk/thread.h 00:02:08.674 TEST_HEADER include/spdk/stdinc.h 00:02:08.674 TEST_HEADER include/spdk/sock.h 00:02:08.674 TEST_HEADER include/spdk/trace.h 00:02:08.674 TEST_HEADER include/spdk/trace_parser.h 00:02:08.674 TEST_HEADER include/spdk/string.h 00:02:08.674 CC app/spdk_dd/spdk_dd.o 00:02:08.674 TEST_HEADER include/spdk/uuid.h 00:02:08.674 TEST_HEADER include/spdk/ublk.h 00:02:08.674 TEST_HEADER include/spdk/tree.h 00:02:08.674 TEST_HEADER include/spdk/version.h 00:02:08.674 TEST_HEADER include/spdk/util.h 00:02:08.674 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:08.674 TEST_HEADER include/spdk/vmd.h 00:02:08.674 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:08.674 TEST_HEADER include/spdk/vhost.h 00:02:08.674 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:08.674 CXX test/cpp_headers/accel.o 00:02:08.674 TEST_HEADER include/spdk/xor.h 00:02:08.674 TEST_HEADER include/spdk/zipf.h 00:02:08.674 CXX test/cpp_headers/assert.o 00:02:08.674 CXX test/cpp_headers/accel_module.o 00:02:08.674 CXX test/cpp_headers/barrier.o 00:02:08.674 CXX test/cpp_headers/base64.o 00:02:08.674 CXX test/cpp_headers/bdev.o 00:02:08.674 CXX test/cpp_headers/bdev_module.o 00:02:08.674 CXX test/cpp_headers/bit_pool.o 00:02:08.674 CXX test/cpp_headers/blob_bdev.o 00:02:08.674 CXX test/cpp_headers/blobfs_bdev.o 00:02:08.674 CXX 
test/cpp_headers/bit_array.o 00:02:08.674 CXX test/cpp_headers/bdev_zone.o 00:02:08.674 CXX test/cpp_headers/blobfs.o 00:02:08.674 CXX test/cpp_headers/config.o 00:02:08.674 CXX test/cpp_headers/cpuset.o 00:02:08.674 CXX test/cpp_headers/conf.o 00:02:08.674 CXX test/cpp_headers/crc32.o 00:02:08.674 CXX test/cpp_headers/blob.o 00:02:08.674 CC app/spdk_tgt/spdk_tgt.o 00:02:08.674 CC app/nvmf_tgt/nvmf_main.o 00:02:08.674 CXX test/cpp_headers/crc16.o 00:02:08.674 CXX test/cpp_headers/crc64.o 00:02:08.674 CXX test/cpp_headers/dma.o 00:02:08.674 CXX test/cpp_headers/endian.o 00:02:08.674 CXX test/cpp_headers/dif.o 00:02:08.674 CXX test/cpp_headers/env_dpdk.o 00:02:08.674 CXX test/cpp_headers/fd_group.o 00:02:08.674 CXX test/cpp_headers/event.o 00:02:08.674 CXX test/cpp_headers/env.o 00:02:08.674 CXX test/cpp_headers/fd.o 00:02:08.674 CXX test/cpp_headers/fsdev_module.o 00:02:08.674 CXX test/cpp_headers/fsdev.o 00:02:08.674 CXX test/cpp_headers/ftl.o 00:02:08.674 CXX test/cpp_headers/file.o 00:02:08.674 CXX test/cpp_headers/fuse_dispatcher.o 00:02:08.674 CXX test/cpp_headers/hexlify.o 00:02:08.674 CXX test/cpp_headers/histogram_data.o 00:02:08.674 CXX test/cpp_headers/gpt_spec.o 00:02:08.674 CXX test/cpp_headers/idxd.o 00:02:08.674 CXX test/cpp_headers/ioat.o 00:02:08.674 CXX test/cpp_headers/init.o 00:02:08.674 CXX test/cpp_headers/idxd_spec.o 00:02:08.674 CXX test/cpp_headers/ioat_spec.o 00:02:08.674 CXX test/cpp_headers/iscsi_spec.o 00:02:08.674 CXX test/cpp_headers/json.o 00:02:08.674 CXX test/cpp_headers/jsonrpc.o 00:02:08.674 CXX test/cpp_headers/keyring.o 00:02:08.674 CXX test/cpp_headers/likely.o 00:02:08.674 CXX test/cpp_headers/keyring_module.o 00:02:08.674 CXX test/cpp_headers/lvol.o 00:02:08.674 CXX test/cpp_headers/log.o 00:02:08.674 CXX test/cpp_headers/md5.o 00:02:08.674 CXX test/cpp_headers/memory.o 00:02:08.674 CXX test/cpp_headers/nbd.o 00:02:08.674 CXX test/cpp_headers/mmio.o 00:02:08.674 CXX test/cpp_headers/net.o 00:02:08.674 CXX test/cpp_headers/notify.o 00:02:08.674 CXX test/cpp_headers/nvme.o 00:02:08.674 CXX test/cpp_headers/nvme_intel.o 00:02:08.674 CXX test/cpp_headers/nvme_ocssd.o 00:02:08.674 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:08.674 CXX test/cpp_headers/nvme_spec.o 00:02:08.674 CXX test/cpp_headers/nvme_zns.o 00:02:08.674 CXX test/cpp_headers/nvmf_cmd.o 00:02:08.674 CC test/thread/lock/spdk_lock.o 00:02:08.674 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:08.674 CXX test/cpp_headers/nvmf.o 00:02:08.674 CXX test/cpp_headers/nvmf_spec.o 00:02:08.674 CXX test/cpp_headers/nvmf_transport.o 00:02:08.674 CXX test/cpp_headers/opal_spec.o 00:02:08.674 CXX test/cpp_headers/opal.o 00:02:08.674 CXX test/cpp_headers/pci_ids.o 00:02:08.674 CXX test/cpp_headers/pipe.o 00:02:08.674 CXX test/cpp_headers/queue.o 00:02:08.674 CXX test/cpp_headers/rpc.o 00:02:08.674 CXX test/cpp_headers/reduce.o 00:02:08.674 CXX test/cpp_headers/scsi.o 00:02:08.674 CXX test/cpp_headers/scheduler.o 00:02:08.674 CXX test/cpp_headers/scsi_spec.o 00:02:08.674 CXX test/cpp_headers/stdinc.o 00:02:08.674 CXX test/cpp_headers/sock.o 00:02:08.674 CXX test/cpp_headers/string.o 00:02:08.674 CXX test/cpp_headers/thread.o 00:02:08.674 CXX test/cpp_headers/trace.o 00:02:08.674 CXX test/cpp_headers/trace_parser.o 00:02:08.674 CC test/app/histogram_perf/histogram_perf.o 00:02:08.674 CC examples/util/zipf/zipf.o 00:02:08.674 CC test/thread/poller_perf/poller_perf.o 00:02:08.674 CC examples/ioat/verify/verify.o 00:02:08.674 CC test/env/pci/pci_ut.o 00:02:08.674 CC test/env/vtophys/vtophys.o 00:02:08.674 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:08.674 LINK spdk_lspci 00:02:08.674 CXX test/cpp_headers/tree.o 00:02:08.934 CC examples/ioat/perf/perf.o 00:02:08.934 CC test/app/jsoncat/jsoncat.o 00:02:08.934 CC test/app/stub/stub.o 00:02:08.934 CC test/env/memory/memory_ut.o 00:02:08.934 CC app/fio/nvme/fio_plugin.o 00:02:08.934 CXX test/cpp_headers/ublk.o 00:02:08.934 CC test/dma/test_dma/test_dma.o 00:02:08.934 CXX test/cpp_headers/util.o 00:02:08.934 LINK rpc_client_test 00:02:08.934 CC app/fio/bdev/fio_plugin.o 00:02:08.934 CC test/app/bdev_svc/bdev_svc.o 00:02:08.934 LINK spdk_nvme_discover 00:02:08.934 CC test/env/mem_callbacks/mem_callbacks.o 00:02:08.934 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:08.934 LINK spdk_trace_record 00:02:08.934 CXX test/cpp_headers/uuid.o 00:02:08.934 CXX test/cpp_headers/version.o 00:02:08.934 CXX test/cpp_headers/vfio_user_pci.o 00:02:08.934 CXX test/cpp_headers/vfio_user_spec.o 00:02:08.934 CXX test/cpp_headers/vhost.o 00:02:08.934 LINK interrupt_tgt 00:02:08.934 CXX test/cpp_headers/vmd.o 00:02:08.934 CXX test/cpp_headers/xor.o 00:02:08.934 CXX test/cpp_headers/zipf.o 00:02:08.934 LINK nvmf_tgt 00:02:08.934 LINK poller_perf 00:02:08.934 LINK histogram_perf 00:02:08.934 LINK vtophys 00:02:08.934 LINK jsoncat 00:02:08.934 LINK zipf 00:02:08.934 LINK iscsi_tgt 00:02:08.934 LINK env_dpdk_post_init 00:02:08.934 LINK stub 00:02:08.934 LINK spdk_tgt 00:02:08.934 LINK verify 00:02:08.934 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:08.934 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:08.934 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:08.934 LINK ioat_perf 00:02:09.193 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:09.193 LINK spdk_trace 00:02:09.193 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:09.193 LINK bdev_svc 00:02:09.193 LINK spdk_dd 00:02:09.193 LINK pci_ut 00:02:09.193 LINK nvme_fuzz 00:02:09.193 LINK test_dma 00:02:09.193 LINK spdk_nvme_perf 00:02:09.193 LINK llvm_vfio_fuzz 00:02:09.193 LINK spdk_nvme_identify 00:02:09.193 LINK vhost_fuzz 00:02:09.451 LINK spdk_nvme 00:02:09.451 LINK spdk_bdev 00:02:09.451 LINK spdk_top 00:02:09.451 LINK mem_callbacks 00:02:09.451 CC examples/idxd/perf/perf.o 00:02:09.451 CC examples/vmd/lsvmd/lsvmd.o 00:02:09.451 CC examples/vmd/led/led.o 00:02:09.451 LINK llvm_nvme_fuzz 00:02:09.451 CC examples/sock/hello_world/hello_sock.o 00:02:09.451 CC app/vhost/vhost.o 00:02:09.710 CC examples/thread/thread/thread_ex.o 00:02:09.710 LINK lsvmd 00:02:09.710 LINK led 00:02:09.710 LINK vhost 00:02:09.710 LINK hello_sock 00:02:09.710 LINK idxd_perf 00:02:09.710 LINK thread 00:02:09.710 LINK spdk_lock 00:02:09.710 LINK memory_ut 00:02:09.969 LINK iscsi_fuzz 00:02:10.537 CC examples/nvme/hotplug/hotplug.o 00:02:10.537 CC examples/nvme/abort/abort.o 00:02:10.537 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:10.537 CC test/event/reactor/reactor.o 00:02:10.537 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:10.537 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:10.537 CC examples/nvme/hello_world/hello_world.o 00:02:10.537 CC examples/nvme/arbitration/arbitration.o 00:02:10.537 CC test/event/event_perf/event_perf.o 00:02:10.537 CC test/event/reactor_perf/reactor_perf.o 00:02:10.537 CC examples/nvme/reconnect/reconnect.o 00:02:10.537 CC test/event/app_repeat/app_repeat.o 00:02:10.537 CC test/event/scheduler/scheduler.o 00:02:10.537 LINK reactor 00:02:10.537 LINK event_perf 00:02:10.537 LINK reactor_perf 00:02:10.537 LINK pmr_persistence 00:02:10.537 LINK hotplug 00:02:10.537 LINK cmb_copy 
00:02:10.537 LINK app_repeat 00:02:10.537 LINK hello_world 00:02:10.795 LINK scheduler 00:02:10.795 LINK abort 00:02:10.795 LINK arbitration 00:02:10.795 LINK reconnect 00:02:10.795 LINK nvme_manage 00:02:10.795 CC test/nvme/startup/startup.o 00:02:10.795 CC test/nvme/compliance/nvme_compliance.o 00:02:10.795 CC test/nvme/fdp/fdp.o 00:02:10.795 CC test/nvme/reserve/reserve.o 00:02:10.795 CC test/nvme/reset/reset.o 00:02:10.795 CC test/nvme/cuse/cuse.o 00:02:10.795 CC test/nvme/boot_partition/boot_partition.o 00:02:10.795 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:10.795 CC test/nvme/e2edp/nvme_dp.o 00:02:10.795 CC test/nvme/err_injection/err_injection.o 00:02:10.795 CC test/nvme/aer/aer.o 00:02:10.795 CC test/nvme/connect_stress/connect_stress.o 00:02:10.795 CC test/nvme/fused_ordering/fused_ordering.o 00:02:10.795 CC test/nvme/sgl/sgl.o 00:02:10.795 CC test/nvme/simple_copy/simple_copy.o 00:02:10.795 CC test/nvme/overhead/overhead.o 00:02:11.054 CC test/accel/dif/dif.o 00:02:11.054 CC test/blobfs/mkfs/mkfs.o 00:02:11.054 LINK startup 00:02:11.054 LINK err_injection 00:02:11.054 LINK connect_stress 00:02:11.054 CC test/lvol/esnap/esnap.o 00:02:11.054 LINK boot_partition 00:02:11.054 LINK reserve 00:02:11.054 LINK doorbell_aers 00:02:11.054 LINK fused_ordering 00:02:11.054 LINK simple_copy 00:02:11.054 LINK reset 00:02:11.054 LINK aer 00:02:11.054 LINK fdp 00:02:11.054 LINK sgl 00:02:11.054 LINK overhead 00:02:11.054 LINK nvme_dp 00:02:11.054 LINK mkfs 00:02:11.314 LINK nvme_compliance 00:02:11.314 CC examples/accel/perf/accel_perf.o 00:02:11.314 LINK dif 00:02:11.574 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:11.574 CC examples/blob/cli/blobcli.o 00:02:11.574 CC examples/blob/hello_world/hello_blob.o 00:02:11.574 LINK hello_fsdev 00:02:11.574 LINK hello_blob 00:02:11.833 LINK accel_perf 00:02:11.833 LINK cuse 00:02:11.833 LINK blobcli 00:02:12.402 CC examples/bdev/hello_world/hello_bdev.o 00:02:12.402 CC examples/bdev/bdevperf/bdevperf.o 00:02:12.662 LINK hello_bdev 00:02:12.921 LINK bdevperf 00:02:12.921 CC test/bdev/bdevio/bdevio.o 00:02:13.179 LINK bdevio 00:02:14.559 LINK esnap 00:02:14.559 CC examples/nvmf/nvmf/nvmf.o 00:02:14.559 LINK nvmf 00:02:15.938 00:02:15.938 real 0m45.990s 00:02:15.938 user 6m16.812s 00:02:15.938 sys 2m32.557s 00:02:15.938 19:28:51 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:15.938 19:28:51 make -- common/autotest_common.sh@10 -- $ set +x 00:02:15.938 ************************************ 00:02:15.938 END TEST make 00:02:15.938 ************************************ 00:02:15.938 19:28:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:15.938 19:28:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:15.938 19:28:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:15.938 19:28:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.938 19:28:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:15.938 19:28:51 -- pm/common@44 -- $ pid=993262 00:02:15.938 19:28:51 -- pm/common@50 -- $ kill -TERM 993262 00:02:15.938 19:28:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.938 19:28:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:15.938 19:28:51 -- pm/common@44 -- $ pid=993264 00:02:15.938 19:28:51 -- pm/common@50 -- $ kill -TERM 993264 00:02:15.938 19:28:51 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:15.938 19:28:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:15.938 19:28:51 -- pm/common@44 -- $ pid=993266 00:02:15.938 19:28:51 -- pm/common@50 -- $ kill -TERM 993266 00:02:15.938 19:28:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.938 19:28:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:15.938 19:28:51 -- pm/common@44 -- $ pid=993289 00:02:15.938 19:28:51 -- pm/common@50 -- $ sudo -E kill -TERM 993289 00:02:15.938 19:28:51 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:15.938 19:28:51 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:02:15.938 19:28:51 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:15.938 19:28:51 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:15.938 19:28:51 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:16.209 19:28:51 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:16.209 19:28:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:16.209 19:28:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:16.209 19:28:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:16.209 19:28:51 -- scripts/common.sh@336 -- # IFS=.-: 00:02:16.209 19:28:51 -- scripts/common.sh@336 -- # read -ra ver1 00:02:16.209 19:28:51 -- scripts/common.sh@337 -- # IFS=.-: 00:02:16.209 19:28:51 -- scripts/common.sh@337 -- # read -ra ver2 00:02:16.209 19:28:51 -- scripts/common.sh@338 -- # local 'op=<' 00:02:16.209 19:28:51 -- scripts/common.sh@340 -- # ver1_l=2 00:02:16.209 19:28:51 -- scripts/common.sh@341 -- # ver2_l=1 00:02:16.209 19:28:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:16.209 19:28:51 -- scripts/common.sh@344 -- # case "$op" in 00:02:16.209 19:28:51 -- scripts/common.sh@345 -- # : 1 00:02:16.209 19:28:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:16.209 19:28:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:16.209 19:28:51 -- scripts/common.sh@365 -- # decimal 1 00:02:16.209 19:28:51 -- scripts/common.sh@353 -- # local d=1 00:02:16.209 19:28:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:16.209 19:28:51 -- scripts/common.sh@355 -- # echo 1 00:02:16.209 19:28:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:16.209 19:28:51 -- scripts/common.sh@366 -- # decimal 2 00:02:16.209 19:28:51 -- scripts/common.sh@353 -- # local d=2 00:02:16.209 19:28:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:16.209 19:28:51 -- scripts/common.sh@355 -- # echo 2 00:02:16.209 19:28:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:16.209 19:28:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:16.209 19:28:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:16.209 19:28:51 -- scripts/common.sh@368 -- # return 0 00:02:16.209 19:28:51 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:16.209 19:28:51 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:16.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:16.209 --rc genhtml_branch_coverage=1 00:02:16.209 --rc genhtml_function_coverage=1 00:02:16.209 --rc genhtml_legend=1 00:02:16.209 --rc geninfo_all_blocks=1 00:02:16.209 --rc geninfo_unexecuted_blocks=1 00:02:16.209 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:16.209 ' 00:02:16.209 19:28:51 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:16.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:16.209 --rc genhtml_branch_coverage=1 00:02:16.209 --rc genhtml_function_coverage=1 00:02:16.209 --rc genhtml_legend=1 00:02:16.209 --rc geninfo_all_blocks=1 00:02:16.209 --rc geninfo_unexecuted_blocks=1 00:02:16.209 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:16.209 ' 00:02:16.209 19:28:51 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:16.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:16.209 --rc genhtml_branch_coverage=1 00:02:16.209 --rc genhtml_function_coverage=1 00:02:16.209 --rc genhtml_legend=1 00:02:16.209 --rc geninfo_all_blocks=1 00:02:16.209 --rc geninfo_unexecuted_blocks=1 00:02:16.209 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:16.209 ' 00:02:16.209 19:28:51 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:16.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:16.209 --rc genhtml_branch_coverage=1 00:02:16.209 --rc genhtml_function_coverage=1 00:02:16.209 --rc genhtml_legend=1 00:02:16.209 --rc geninfo_all_blocks=1 00:02:16.209 --rc geninfo_unexecuted_blocks=1 00:02:16.209 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:16.209 ' 00:02:16.209 19:28:51 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:16.209 19:28:51 -- nvmf/common.sh@7 -- # uname -s 00:02:16.209 19:28:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:16.209 19:28:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:16.209 19:28:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:16.209 19:28:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:16.209 19:28:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:16.209 19:28:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:16.209 19:28:51 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:16.209 19:28:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:16.209 19:28:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:16.209 19:28:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:16.209 19:28:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:16.209 19:28:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:16.209 19:28:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:16.209 19:28:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:16.209 19:28:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:16.209 19:28:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:16.209 19:28:51 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:16.209 19:28:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:16.209 19:28:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:16.209 19:28:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:16.209 19:28:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:16.209 19:28:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.209 19:28:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.209 19:28:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.209 19:28:51 -- paths/export.sh@5 -- # export PATH 00:02:16.209 19:28:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.209 19:28:51 -- nvmf/common.sh@51 -- # : 0 00:02:16.209 19:28:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:16.209 19:28:51 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:16.209 19:28:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:16.209 19:28:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:16.209 19:28:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:16.209 19:28:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:16.209 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:16.209 19:28:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:16.209 19:28:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:16.209 19:28:51 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:16.209 19:28:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:16.209 19:28:51 -- spdk/autotest.sh@32 -- # uname -s 00:02:16.209 
19:28:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:16.209 19:28:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:16.209 19:28:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:16.209 19:28:51 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:16.209 19:28:51 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:16.209 19:28:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:16.209 19:28:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:16.209 19:28:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:16.210 19:28:51 -- spdk/autotest.sh@48 -- # udevadm_pid=1056515 00:02:16.210 19:28:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:16.210 19:28:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:16.210 19:28:51 -- pm/common@17 -- # local monitor 00:02:16.210 19:28:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.210 19:28:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.210 19:28:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.210 19:28:51 -- pm/common@21 -- # date +%s 00:02:16.210 19:28:51 -- pm/common@21 -- # date +%s 00:02:16.210 19:28:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.210 19:28:51 -- pm/common@25 -- # sleep 1 00:02:16.210 19:28:51 -- pm/common@21 -- # date +%s 00:02:16.210 19:28:51 -- pm/common@21 -- # date +%s 00:02:16.210 19:28:51 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732645731 00:02:16.210 19:28:51 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732645731 00:02:16.210 19:28:51 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732645731 00:02:16.210 19:28:51 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732645731 00:02:16.210 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732645731_collect-cpu-load.pm.log 00:02:16.210 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732645731_collect-vmstat.pm.log 00:02:16.210 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732645731_collect-cpu-temp.pm.log 00:02:16.210 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732645731_collect-bmc-pm.bmc.pm.log 00:02:17.144 19:28:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:17.144 19:28:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:17.144 19:28:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:17.144 19:28:52 -- common/autotest_common.sh@10 -- # set +x 
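The trace above shows autotest.sh saving the host's existing kernel core handler and pointing future core dumps at SPDK's core-collector.sh before the tests start, alongside the udevadm monitor and the per-resource power/CPU collectors. A minimal sketch of that core-pattern swap, assuming the standard /proc/sys/kernel/core_pattern interface (the redirect target itself is not visible in the xtrace output above, and $rootdir / $output_dir stand in for the repository and output paths seen in the log):

    # Save the current core handler so it can be restored after the run.
    old_core_pattern=$(< /proc/sys/kernel/core_pattern)
    mkdir -p "$output_dir/coredumps"

    # Route new core dumps through the collector script instead of systemd-coredump.
    # %P = PID, %s = signal number, %t = dump time (see core(5)).
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern

With the pattern in place, any test process that crashes during the run leaves a core file under $output_dir/coredumps for later triage, and the saved old_core_pattern can be written back on cleanup.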
00:02:17.144 19:28:52 -- spdk/autotest.sh@59 -- # create_test_list 00:02:17.144 19:28:52 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:17.144 19:28:52 -- common/autotest_common.sh@10 -- # set +x 00:02:17.402 19:28:52 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:17.402 19:28:52 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:17.402 19:28:52 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:17.402 19:28:52 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:17.402 19:28:52 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:17.402 19:28:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:17.402 19:28:52 -- common/autotest_common.sh@1457 -- # uname 00:02:17.402 19:28:52 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:17.402 19:28:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:17.402 19:28:52 -- common/autotest_common.sh@1477 -- # uname 00:02:17.402 19:28:52 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:17.402 19:28:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:17.402 19:28:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:17.402 lcov: LCOV version 1.15 00:02:17.402 19:28:52 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:02:22.674 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:02:28.024 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:33.295 19:29:08 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:33.295 19:29:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:33.295 19:29:08 -- common/autotest_common.sh@10 -- # set +x 00:02:33.295 19:29:08 -- spdk/autotest.sh@78 -- # rm -f 00:02:33.295 19:29:08 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:36.585 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:36.585 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:36.585 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:36.585 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:36.585 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:36.585 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:36.585 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:36.585 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:36.585 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:36.585 
0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:36.585 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:36.585 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:36.585 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:36.585 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:36.585 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:36.585 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:36.844 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:36.844 19:29:11 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:36.844 19:29:11 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:36.844 19:29:11 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:36.844 19:29:11 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:36.844 19:29:11 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:36.844 19:29:11 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:36.844 19:29:11 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:36.844 19:29:11 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:36.844 19:29:11 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:36.844 19:29:11 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:36.844 19:29:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:36.844 19:29:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:36.844 19:29:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:36.844 19:29:11 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:36.844 19:29:11 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:36.844 No valid GPT data, bailing 00:02:36.844 19:29:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:36.844 19:29:12 -- scripts/common.sh@394 -- # pt= 00:02:36.844 19:29:12 -- scripts/common.sh@395 -- # return 1 00:02:36.844 19:29:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:36.844 1+0 records in 00:02:36.844 1+0 records out 00:02:36.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00466644 s, 225 MB/s 00:02:36.844 19:29:12 -- spdk/autotest.sh@105 -- # sync 00:02:36.844 19:29:12 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:36.844 19:29:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:36.844 19:29:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:44.969 19:29:19 -- spdk/autotest.sh@111 -- # uname -s 00:02:44.969 19:29:19 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:44.969 19:29:19 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:02:44.969 19:29:19 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:44.969 19:29:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:44.969 19:29:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:44.969 19:29:19 -- common/autotest_common.sh@10 -- # set +x 00:02:44.969 ************************************ 00:02:44.969 START TEST setup.sh 00:02:44.969 ************************************ 00:02:44.969 19:29:19 setup.sh -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:44.969 * Looking for test storage... 
00:02:44.969 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:44.969 19:29:19 setup.sh -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:44.969 19:29:19 setup.sh -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:44.969 19:29:19 setup.sh -- common/autotest_common.sh@1693 -- # lcov --version 00:02:44.969 19:29:19 setup.sh -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@345 -- # : 1 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@353 -- # local d=1 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@355 -- # echo 1 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@353 -- # local d=2 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@355 -- # echo 2 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:44.969 19:29:19 setup.sh -- scripts/common.sh@368 -- # return 0 00:02:44.969 19:29:19 setup.sh -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:44.969 19:29:19 setup.sh -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:44.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.969 --rc genhtml_branch_coverage=1 00:02:44.969 --rc genhtml_function_coverage=1 00:02:44.969 --rc genhtml_legend=1 00:02:44.969 --rc geninfo_all_blocks=1 00:02:44.969 --rc geninfo_unexecuted_blocks=1 00:02:44.969 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:44.969 ' 00:02:44.969 19:29:19 setup.sh -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:44.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.969 --rc genhtml_branch_coverage=1 00:02:44.969 --rc genhtml_function_coverage=1 00:02:44.969 --rc genhtml_legend=1 00:02:44.969 --rc geninfo_all_blocks=1 00:02:44.969 --rc geninfo_unexecuted_blocks=1 
00:02:44.969 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:44.969 ' 00:02:44.969 19:29:19 setup.sh -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:44.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.969 --rc genhtml_branch_coverage=1 00:02:44.969 --rc genhtml_function_coverage=1 00:02:44.969 --rc genhtml_legend=1 00:02:44.969 --rc geninfo_all_blocks=1 00:02:44.969 --rc geninfo_unexecuted_blocks=1 00:02:44.969 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:44.969 ' 00:02:44.969 19:29:19 setup.sh -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:44.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.969 --rc genhtml_branch_coverage=1 00:02:44.969 --rc genhtml_function_coverage=1 00:02:44.969 --rc genhtml_legend=1 00:02:44.969 --rc geninfo_all_blocks=1 00:02:44.969 --rc geninfo_unexecuted_blocks=1 00:02:44.969 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:44.969 ' 00:02:44.970 19:29:19 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:44.970 19:29:19 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:44.970 19:29:19 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:44.970 19:29:19 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:44.970 19:29:19 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:44.970 19:29:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:44.970 ************************************ 00:02:44.970 START TEST acl 00:02:44.970 ************************************ 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:44.970 * Looking for test storage... 
00:02:44.970 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1693 -- # lcov --version 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:44.970 19:29:19 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:44.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.970 --rc genhtml_branch_coverage=1 00:02:44.970 --rc genhtml_function_coverage=1 00:02:44.970 --rc genhtml_legend=1 00:02:44.970 --rc geninfo_all_blocks=1 00:02:44.970 --rc geninfo_unexecuted_blocks=1 00:02:44.970 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:44.970 ' 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:44.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.970 --rc genhtml_branch_coverage=1 00:02:44.970 --rc 
genhtml_function_coverage=1 00:02:44.970 --rc genhtml_legend=1 00:02:44.970 --rc geninfo_all_blocks=1 00:02:44.970 --rc geninfo_unexecuted_blocks=1 00:02:44.970 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:44.970 ' 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:44.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.970 --rc genhtml_branch_coverage=1 00:02:44.970 --rc genhtml_function_coverage=1 00:02:44.970 --rc genhtml_legend=1 00:02:44.970 --rc geninfo_all_blocks=1 00:02:44.970 --rc geninfo_unexecuted_blocks=1 00:02:44.970 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:44.970 ' 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:44.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.970 --rc genhtml_branch_coverage=1 00:02:44.970 --rc genhtml_function_coverage=1 00:02:44.970 --rc genhtml_legend=1 00:02:44.970 --rc geninfo_all_blocks=1 00:02:44.970 --rc geninfo_unexecuted_blocks=1 00:02:44.970 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:44.970 ' 00:02:44.970 19:29:19 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:44.970 19:29:19 setup.sh.acl -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:44.970 19:29:19 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:44.970 19:29:19 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:44.970 19:29:19 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:44.970 19:29:19 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:44.970 19:29:19 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:44.970 19:29:19 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:44.970 19:29:19 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:49.165 19:29:23 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:49.165 19:29:23 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:49.165 19:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.165 19:29:23 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:49.165 19:29:23 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.165 19:29:23 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:51.071 Hugepages 00:02:51.071 node hugesize free / total 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.071 19:29:26 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.071 00:02:51.071 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.071 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:51.331 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:51.332 19:29:26 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:51.332 19:29:26 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:51.332 19:29:26 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:51.332 19:29:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:51.332 ************************************ 00:02:51.332 START TEST denied 00:02:51.332 ************************************ 00:02:51.332 19:29:26 setup.sh.acl.denied -- 
common/autotest_common.sh@1129 -- # denied 00:02:51.332 19:29:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:51.332 19:29:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:51.332 19:29:26 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:51.332 19:29:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.332 19:29:26 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:54.620 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:54.880 19:29:29 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:54.880 19:29:29 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:54.880 19:29:29 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:54.880 19:29:29 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:54.880 19:29:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:54.880 19:29:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:54.880 19:29:29 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:54.880 19:29:29 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:54.880 19:29:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:54.880 19:29:29 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.074 00:02:59.074 real 0m7.600s 00:02:59.074 user 0m2.261s 00:02:59.074 sys 0m4.644s 00:02:59.074 19:29:34 setup.sh.acl.denied -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:59.074 19:29:34 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:59.074 ************************************ 00:02:59.074 END TEST denied 00:02:59.074 ************************************ 00:02:59.074 19:29:34 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:59.074 19:29:34 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:59.074 19:29:34 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:59.074 19:29:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:59.074 ************************************ 00:02:59.074 START TEST allowed 00:02:59.074 ************************************ 00:02:59.074 19:29:34 setup.sh.acl.allowed -- common/autotest_common.sh@1129 -- # allowed 00:02:59.074 19:29:34 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:59.074 19:29:34 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:59.074 19:29:34 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:59.074 19:29:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.074 19:29:34 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:04.347 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:04.347 19:29:38 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:04.347 19:29:38 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:04.347 19:29:38 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:04.347 19:29:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.347 19:29:38 setup.sh.acl.allowed 
-- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.639 00:03:07.639 real 0m8.344s 00:03:07.639 user 0m2.260s 00:03:07.639 sys 0m4.589s 00:03:07.639 19:29:42 setup.sh.acl.allowed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:07.639 19:29:42 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:07.639 ************************************ 00:03:07.639 END TEST allowed 00:03:07.639 ************************************ 00:03:07.639 00:03:07.639 real 0m22.946s 00:03:07.639 user 0m6.969s 00:03:07.639 sys 0m13.970s 00:03:07.639 19:29:42 setup.sh.acl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:07.639 19:29:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:07.639 ************************************ 00:03:07.639 END TEST acl 00:03:07.639 ************************************ 00:03:07.639 19:29:42 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:07.639 19:29:42 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:07.639 19:29:42 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:07.639 19:29:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:07.639 ************************************ 00:03:07.639 START TEST hugepages 00:03:07.639 ************************************ 00:03:07.639 19:29:42 setup.sh.hugepages -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:07.639 * Looking for test storage... 00:03:07.639 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:07.639 19:29:42 setup.sh.hugepages -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:07.639 19:29:42 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # lcov --version 00:03:07.639 19:29:42 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:07.639 19:29:42 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:03:07.639 19:29:42 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:07.899 19:29:42 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:03:07.899 19:29:42 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:03:07.899 19:29:42 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:07.899 19:29:42 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:07.899 19:29:42 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:03:07.899 19:29:42 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:07.899 19:29:42 setup.sh.hugepages -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:07.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.899 --rc genhtml_branch_coverage=1 00:03:07.899 --rc genhtml_function_coverage=1 00:03:07.899 --rc genhtml_legend=1 00:03:07.899 --rc geninfo_all_blocks=1 00:03:07.899 --rc geninfo_unexecuted_blocks=1 00:03:07.899 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:07.899 ' 00:03:07.899 19:29:42 setup.sh.hugepages -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:07.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.899 --rc genhtml_branch_coverage=1 00:03:07.899 --rc genhtml_function_coverage=1 00:03:07.899 --rc genhtml_legend=1 00:03:07.899 --rc geninfo_all_blocks=1 00:03:07.899 --rc geninfo_unexecuted_blocks=1 00:03:07.899 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:07.899 ' 00:03:07.899 19:29:42 setup.sh.hugepages -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:07.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.899 --rc genhtml_branch_coverage=1 00:03:07.899 --rc genhtml_function_coverage=1 00:03:07.899 --rc genhtml_legend=1 00:03:07.899 --rc geninfo_all_blocks=1 00:03:07.899 --rc geninfo_unexecuted_blocks=1 00:03:07.899 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:07.899 ' 00:03:07.899 19:29:42 setup.sh.hugepages -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:07.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.899 --rc genhtml_branch_coverage=1 00:03:07.899 --rc genhtml_function_coverage=1 00:03:07.899 --rc genhtml_legend=1 00:03:07.899 --rc geninfo_all_blocks=1 00:03:07.899 --rc geninfo_unexecuted_blocks=1 00:03:07.899 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:07.899 ' 00:03:07.899 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:07.899 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:07.899 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:07.899 19:29:42 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41318512 kB' 'MemAvailable: 42812944 kB' 'Buffers: 2708 kB' 'Cached: 10147952 kB' 'SwapCached: 72 kB' 'Active: 6644464 kB' 'Inactive: 4118284 kB' 'Active(anon): 5764576 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615412 kB' 'Mapped: 171596 kB' 'Shmem: 8567912 kB' 'KReclaimable: 559332 kB' 'Slab: 1546184 kB' 'SReclaimable: 559332 kB' 'SUnreclaim: 986852 kB' 'KernelStack: 21984 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433348 kB' 'Committed_AS: 10476372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217988 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 
19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- 
# read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.900 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
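(Note: once the scan below resolves Hugepagesize to 2048 kB, the trace that follows shows hugepages.sh clearing every per-node hugepage pool (clear_hp echoing 0) and then requesting NRHUGE=1024 pages on HUGENODE=0 via scripts/setup.sh. A hedged sketch of the equivalent operations against the standard kernel sysfs interfaces, with the counts taken from this log, is:)

```sh
#!/usr/bin/env bash
# Hedged sketch of the clear-then-allocate flow seen in the trace;
# the sysfs paths are the generic kernel interfaces, not SPDK-specific.
kb=2048        # default_hugepages, from the Hugepagesize scan
per_node=1024  # NRHUGE requested for the single-node test

# clear_hp equivalent: zero every hugepage pool on every NUMA node
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 | sudo tee "$hp" > /dev/null
done

# single_node_setup equivalent: reserve 1024 x 2048 kB pages on node 0 only
echo "$per_node" | sudo tee \
    "/sys/devices/system/node/node0/hugepages/hugepages-${kb}kB/nr_hugepages" > /dev/null
```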
00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # 
default_hugepages=2048 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:07.901 19:29:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:07.901 19:29:43 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:07.901 19:29:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:07.901 19:29:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:07.901 19:29:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:07.901 19:29:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:07.901 19:29:43 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:03:07.901 19:29:43 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:03:07.901 19:29:43 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:03:07.901 19:29:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:07.901 19:29:43 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:07.901 19:29:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:07.901 ************************************ 00:03:07.901 START TEST single_node_setup 00:03:07.901 ************************************ 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1129 -- # single_node_setup 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:03:07.901 19:29:43 
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.901 19:29:43 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:11.183 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:11.183 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:12.563 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup 
-- setup/hugepages.sh@137 -- # verify_nr_hugepages 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43499216 kB' 'MemAvailable: 44993584 kB' 'Buffers: 2708 kB' 'Cached: 10148088 kB' 'SwapCached: 72 kB' 'Active: 6645548 kB' 'Inactive: 4118284 kB' 'Active(anon): 5765660 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616276 kB' 'Mapped: 171360 kB' 'Shmem: 8568048 kB' 'KReclaimable: 559268 kB' 'Slab: 1544808 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 985540 kB' 'KernelStack: 22016 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10475656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218132 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.563 19:29:47 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.563 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 
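(Note: verify_nr_hugepages, whose trace continues below, re-reads /proc/meminfo for AnonHugePages, HugePages_Surp and HugePages_Rsvd and compares the pool against the requested count; the dump above already shows HugePages_Total/Free at 1024. A hedged sketch of that kind of check is given here; the real helper applies its own surplus/reserved accounting, so this only covers the simple case visible in this log.)

```sh
#!/usr/bin/env bash
# Hedged sketch of a hugepage-pool verification, not the SPDK helper itself.
expected=1024

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
free=$(awk  '/^HugePages_Free:/  {print $2}' /proc/meminfo)
rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)

# Simple case from the log: 1024 total, 1024 free, no reserved or surplus pages.
if (( total - surp == expected )); then
    echo "hugepage pool OK: total=$total free=$free rsvd=$rsvd surp=$surp"
else
    echo "unexpected hugepage pool: total=$total surp=$surp (wanted $expected)" >&2
    exit 1
fi
```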
00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.564 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:12.565 19:29:47 
setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43501096 kB' 'MemAvailable: 44995464 kB' 'Buffers: 2708 kB' 'Cached: 10148092 kB' 'SwapCached: 72 kB' 'Active: 6646176 kB' 'Inactive: 4118284 kB' 'Active(anon): 5766288 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616952 kB' 'Mapped: 171364 kB' 'Shmem: 8568052 kB' 'KReclaimable: 559268 kB' 'Slab: 1544764 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 985496 kB' 'KernelStack: 22176 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10477180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218132 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.565 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.566 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43498976 kB' 'MemAvailable: 44993344 kB' 'Buffers: 2708 kB' 'Cached: 10148108 kB' 'SwapCached: 72 kB' 'Active: 6648284 kB' 'Inactive: 4118284 kB' 'Active(anon): 5768396 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619012 kB' 'Mapped: 171812 kB' 'Shmem: 8568068 kB' 'KReclaimable: 559268 kB' 'Slab: 1544820 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 985552 kB' 'KernelStack: 22080 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10480272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218116 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 
19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.567 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.568 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.569 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:12.570 nr_hugepages=1024 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:12.570 resv_hugepages=0 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:12.570 surplus_hugepages=0 00:03:12.570 19:29:47 
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:12.570 anon_hugepages=0 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43496804 kB' 'MemAvailable: 44991172 kB' 'Buffers: 2708 kB' 'Cached: 10148108 kB' 'SwapCached: 72 kB' 'Active: 6652012 kB' 'Inactive: 4118284 kB' 'Active(anon): 5772124 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623164 kB' 'Mapped: 171808 kB' 'Shmem: 8568068 kB' 'KReclaimable: 559268 kB' 'Slab: 1544820 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 985552 kB' 'KernelStack: 22128 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10483344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218232 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.570 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.571 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- 
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.572 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 23593860 kB' 'MemUsed: 9040576 kB' 'SwapCached: 40 kB' 'Active: 4635672 kB' 'Inactive: 389304 kB' 'Active(anon): 3858244 kB' 'Inactive(anon): 48 kB' 'Active(file): 777428 kB' 'Inactive(file): 389256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4787224 kB' 'Mapped: 103312 kB' 'AnonPages: 240908 kB' 'Shmem: 3620500 kB' 'KernelStack: 10792 kB' 'PageTables: 6200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 386648 kB' 'Slab: 862092 kB' 'SReclaimable: 386648 kB' 'SUnreclaim: 475444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.573 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:12.574 node0=1024 expecting 1024 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:12.574 00:03:12.574 real 0m4.783s 00:03:12.574 user 0m1.071s 00:03:12.574 sys 0m2.129s 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:12.574 19:29:47 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:03:12.574 ************************************ 00:03:12.574 END TEST single_node_setup 00:03:12.574 ************************************ 00:03:12.833 19:29:47 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:03:12.833 19:29:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:12.833 19:29:47 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:12.833 19:29:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:12.833 ************************************ 00:03:12.833 START TEST even_2G_alloc 00:03:12.833 ************************************ 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1129 -- # even_2G_alloc 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local 
_nr_hugepages=1024 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.833 19:29:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:16.123 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.123 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.123 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.123 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.123 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.123 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.123 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.123 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.123 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.123 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.123 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.123 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.123 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.123 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.124 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.124 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.124 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:16.124 19:29:51 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43475004 kB' 'MemAvailable: 44969372 kB' 'Buffers: 2708 kB' 'Cached: 10148268 kB' 'SwapCached: 72 kB' 'Active: 6644480 kB' 'Inactive: 4118284 kB' 'Active(anon): 5764592 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615068 kB' 'Mapped: 170276 kB' 'Shmem: 8568228 kB' 'KReclaimable: 559268 kB' 'Slab: 1544928 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 985660 kB' 'KernelStack: 21936 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10467980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218052 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 
19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
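The xtrace around this point is the get_meminfo helper from setup/common.sh walking /proc/meminfo one field at a time until it reaches the key it was asked for (AnonHugePages here, HugePages_Surp and HugePages_Rsvd further down), which is why every other field shows up only as a continue. A minimal sketch of that scan, reconstructed from the trace rather than quoted from the source; get_meminfo_field is a hypothetical name, and the per-node /sys/devices/system/node/nodeN/meminfo handling and "Node N" prefix stripping visible in the trace are left out:

get_meminfo_field() {
    # Sketch only: simplified reconstruction of the scan traced in this log.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every other meminfo field is skipped
        echo "$val"                        # kB for sizes, a page count for HugePages_*
        return 0
    done < /proc/meminfo
}

In the run above it is effectively invoked as anon=$(get_meminfo AnonHugePages), and the stored result is 0.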
00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.124 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 
19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.125 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43476856 kB' 'MemAvailable: 44971224 kB' 'Buffers: 2708 kB' 'Cached: 10148272 kB' 'SwapCached: 72 kB' 'Active: 6644184 kB' 
'Inactive: 4118284 kB' 'Active(anon): 5764296 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614804 kB' 'Mapped: 170152 kB' 'Shmem: 8568232 kB' 'KReclaimable: 559268 kB' 'Slab: 1544848 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 985580 kB' 'KernelStack: 21920 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10468000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218020 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.127 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43476100 kB' 'MemAvailable: 44970468 kB' 'Buffers: 2708 kB' 'Cached: 10148272 kB' 'SwapCached: 72 kB' 'Active: 6644184 kB' 'Inactive: 4118284 kB' 'Active(anon): 5764296 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614804 kB' 'Mapped: 170152 kB' 'Shmem: 8568232 kB' 'KReclaimable: 559268 kB' 'Slab: 1544848 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 985580 kB' 'KernelStack: 21920 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10468020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218020 kB' 'VmallocChunk: 0 
kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.128 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:16.129 nr_hugepages=1024 00:03:16.129 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:16.129 resv_hugepages=0 00:03:16.130 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:16.130 surplus_hugepages=0 00:03:16.130 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:16.130 anon_hugepages=0 00:03:16.130 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.130 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:16.130 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:16.130 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.130 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.130 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.130 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.130 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.130 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43476708 kB' 'MemAvailable: 44971076 kB' 'Buffers: 2708 kB' 'Cached: 10148272 kB' 'SwapCached: 72 kB' 'Active: 6643956 kB' 'Inactive: 4118284 kB' 'Active(anon): 5764068 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614576 kB' 'Mapped: 170152 kB' 'Shmem: 8568232 kB' 'KReclaimable: 559268 kB' 'Slab: 1544848 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 985580 kB' 'KernelStack: 21888 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10468040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218020 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.393 19:29:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.393 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.394 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:16.395 19:29:51 
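The trace above shows setup/common.sh walking /proc/meminfo (or a per-node meminfo file under /sys/devices/system/node) field by field until it reaches the requested key, and setup/hugepages.sh splitting the 1024 allocated 2048 kB pages evenly across the two detected NUMA nodes (512 each, no surplus or reserved pages). A minimal, self-contained Bash sketch of the same idea follows; get_meminfo_sketch is an illustrative name for this note only, not the actual SPDK helper, and the parsing here is a simplified stand-in for the field-by-field scan seen in the trace.
# Sketch: read one "key: value" field from /proc/meminfo or from a
# per-node meminfo file (per-node lines carry a "Node <N>" prefix,
# which is stripped before matching).
get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && file=/sys/devices/system/node/node$node/meminfo
    sed 's/^Node [0-9]* *//' "$file" | awk -v k="$key:" '$1 == k {print $2; exit}'
}
# Usage, matching the values logged in this run: 1024 pages total,
# 512 on node0 and 512 on node1.
total=$(get_meminfo_sketch HugePages_Total)
node0=$(get_meminfo_sketch HugePages_Total 0)
node1=$(get_meminfo_sketch HugePages_Total 1)
(( total == node0 + node1 )) && echo "even 2G allocation OK"
Under those assumptions, the even_2G_alloc check in this run reduces to confirming that HugePages_Total is 1024 system-wide and 512 on each node, with HugePages_Rsvd and HugePages_Surp both 0, which is what the per-node reads below go on to verify.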
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 24622864 kB' 'MemUsed: 8011572 kB' 'SwapCached: 40 kB' 'Active: 4634148 kB' 'Inactive: 389304 kB' 'Active(anon): 3856720 kB' 'Inactive(anon): 48 kB' 'Active(file): 777428 kB' 'Inactive(file): 389256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4787320 kB' 'Mapped: 102008 kB' 'AnonPages: 239328 kB' 'Shmem: 3620596 kB' 'KernelStack: 10360 kB' 'PageTables: 4900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 386648 kB' 'Slab: 862300 kB' 'SReclaimable: 386648 kB' 'SUnreclaim: 475652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 
19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.395 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 
19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 18853968 kB' 'MemUsed: 8795392 kB' 'SwapCached: 32 kB' 'Active: 2010192 kB' 'Inactive: 3728980 kB' 'Active(anon): 1907732 kB' 'Inactive(anon): 3415376 kB' 'Active(file): 102460 kB' 'Inactive(file): 313604 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5363792 kB' 'Mapped: 68144 kB' 'AnonPages: 375604 kB' 'Shmem: 4947696 kB' 'KernelStack: 11544 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 172620 kB' 'Slab: 682548 kB' 'SReclaimable: 172620 kB' 'SUnreclaim: 509928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.396 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 
19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.397 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- 
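The trace above is setup/common.sh's get_meminfo walking /sys/devices/system/node/node1/meminfo field by field until it reaches HugePages_Surp and echoes its value (0 here), exactly as it did for node 0 a moment earlier. A minimal sketch of that helper, reconstructed from the common.sh@17-@33 trace lines; the names follow the trace but this is an approximation, not the verbatim SPDK source:

  shopt -s extglob

  get_meminfo() {   # usage: get_meminfo <field> [numa-node]
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      local -a mem
      # A per-node query reads that node's meminfo when the sysfs file exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node N " on per-node files
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"
              return 0
          fi
      done
      echo 0   # field not present
  }

  get_meminfo HugePages_Surp 1   # -> 0 in the run above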
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:03:16.398 node0=512 expecting 512 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:16.398 node1=512 expecting 512 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:03:16.398 00:03:16.398 real 0m3.600s 00:03:16.398 user 0m1.318s 00:03:16.398 sys 0m2.339s 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:16.398 19:29:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:16.398 ************************************ 00:03:16.398 END TEST even_2G_alloc 00:03:16.398 ************************************ 00:03:16.398 19:29:51 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:03:16.398 19:29:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:16.398 19:29:51 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:16.398 19:29:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:16.398 ************************************ 00:03:16.398 START TEST odd_alloc 00:03:16.398 ************************************ 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1129 -- # odd_alloc 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:16.398 19:29:51 
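At this point even_2G_alloc has passed: both nodes report 512 hugepages against the "expecting 512" check, and odd_alloc starts by asking get_test_nr_hugepages for 2098176 kB. A worked version of how that request becomes the odd page count seen in the trace, assuming the size is rounded up to whole 2048 kB pages (which matches the nr_hugepages=1025 printed by hugepages.sh@56):

  size_kb=2098176                                     # requested by odd_alloc
  page_kb=2048                                        # default 2 MiB hugepage
  nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))
  echo "$nr_hugepages"                                # 1025, an intentionally odd total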
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.398 19:29:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:19.687 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.687 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:19.687 19:29:54 
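The get_test_nr_hugepages_per_node trace above spreads the odd total across the two NUMA nodes (node1 is set to 512 first, then node0 to 513, so 513 + 512 = 1025), HUGEMEM is bumped to 2049 and setup.sh re-runs; the "Already using the vfio-pci driver" lines only confirm the devices stay bound. A minimal sketch of that split, assuming an even base share per node with the single leftover page pushed onto node 0; variable names mirror the trace but are illustrative:

  nr_hugepages=1025
  no_nodes=2
  nodes_test=()
  base=$(( nr_hugepages / no_nodes ))     # 512
  rem=$(( nr_hugepages % no_nodes ))      # 1
  for (( node = no_nodes - 1; node >= 0; node-- )); do
      nodes_test[node]=$base
  done
  nodes_test[0]=$(( base + rem ))         # node0 absorbs the leftover page
  echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512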
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43480492 kB' 'MemAvailable: 44974860 kB' 'Buffers: 2708 kB' 'Cached: 10148436 kB' 'SwapCached: 72 kB' 'Active: 6645888 kB' 'Inactive: 4118284 kB' 'Active(anon): 5766000 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615784 kB' 'Mapped: 170272 kB' 'Shmem: 8568396 kB' 'KReclaimable: 559268 kB' 'Slab: 1545192 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 985924 kB' 'KernelStack: 21952 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 10468664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218116 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.687 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 
19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43481188 kB' 'MemAvailable: 44975556 kB' 'Buffers: 2708 kB' 'Cached: 10148440 kB' 'SwapCached: 72 kB' 'Active: 6645212 kB' 'Inactive: 4118284 kB' 'Active(anon): 5765324 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615636 kB' 'Mapped: 170164 kB' 'Shmem: 8568400 kB' 'KReclaimable: 559268 kB' 'Slab: 1545136 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 985868 kB' 'KernelStack: 21904 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 10468680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218100 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
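With transparent hugepages at "[madvise]", the anon check above ends as soon as the AnonHugePages field matches and comes back 0 (anon=0); verify_nr_hugepages then re-reads the system-wide counters before walking each node, and the meminfo dump it is scanning already shows HugePages_Total and HugePages_Free at 1025. A small sketch of the counters it is collecting, reusing the get_meminfo sketch above; the expected values are the ones visible in this run, not hard requirements:

  anon=$(get_meminfo AnonHugePages)     # 0 here: THP is [madvise] and nothing was faulted in
  surp=$(get_meminfo HugePages_Surp)    # 0 in this run, no surplus pages were created
  resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
  total=$(get_meminfo HugePages_Total)  # 1025 for odd_alloc
  echo "anon=$anon surp=$surp resv=$resv total=$total"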
00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.688 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.689 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.690 
19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
# [[ -n '' ]] 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43481748 kB' 'MemAvailable: 44976116 kB' 'Buffers: 2708 kB' 'Cached: 10148456 kB' 'SwapCached: 72 kB' 'Active: 6644980 kB' 'Inactive: 4118284 kB' 'Active(anon): 5765092 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615284 kB' 'Mapped: 170164 kB' 'Shmem: 8568416 kB' 'KReclaimable: 559268 kB' 'Slab: 1545136 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 985868 kB' 'KernelStack: 21904 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 10468700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218100 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:19.690 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.953 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.954 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:03:19.955 nr_hugepages=1025 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:19.955 resv_hugepages=0 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:19.955 surplus_hugepages=0 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:19.955 anon_hugepages=0 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43483116 kB' 'MemAvailable: 44977484 kB' 'Buffers: 2708 kB' 'Cached: 10148476 kB' 'SwapCached: 72 kB' 'Active: 6644948 kB' 'Inactive: 4118284 kB' 'Active(anon): 5765060 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615280 kB' 'Mapped: 170164 kB' 'Shmem: 8568436 kB' 'KReclaimable: 559268 kB' 'Slab: 1545136 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 985868 kB' 'KernelStack: 21904 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 
'Committed_AS: 10468724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218100 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.955 19:29:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.955 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.956 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 24622992 kB' 'MemUsed: 8011444 kB' 'SwapCached: 40 kB' 'Active: 4633384 kB' 'Inactive: 389304 kB' 'Active(anon): 3855956 kB' 'Inactive(anon): 48 kB' 'Active(file): 777428 kB' 'Inactive(file): 389256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4787436 kB' 'Mapped: 102020 kB' 'AnonPages: 238448 kB' 'Shmem: 3620712 kB' 'KernelStack: 10360 kB' 'PageTables: 4900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 386648 kB' 'Slab: 862544 kB' 'SReclaimable: 386648 kB' 'SUnreclaim: 475896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.957 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 
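What the trace above is doing: setup/common.sh's get_meminfo dumps the node-0 meminfo file and then walks it field by field until it reaches the requested key (HugePages_Surp here), which is why every meminfo field name shows up with its own IFS/read/continue lines. A much shorter stand-alone sketch of the same lookup is below, using the paths seen in the trace but a hypothetical helper name; it is an illustration of the idea, not the script's actual implementation (which builds the whole dump with mapfile first).

get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # fall back to the per-node file when a node is given and it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}        # per-node files prefix every line with "Node <N> "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                   # kB for size fields, a plain count for HugePages_*
            return 0
        fi
    done <"$mem_f"
    return 1
}

Run against the node-0 dump printed above, get_meminfo_sketch HugePages_Surp 0 would print 0 and get_meminfo_sketch HugePages_Total 0 would print 513.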
00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.958 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 18860628 kB' 'MemUsed: 8788732 kB' 'SwapCached: 32 kB' 'Active: 2011564 kB' 'Inactive: 3728980 kB' 'Active(anon): 1909104 kB' 'Inactive(anon): 3415376 kB' 'Active(file): 102460 kB' 'Inactive(file): 313604 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5363820 kB' 'Mapped: 68144 kB' 'AnonPages: 376832 kB' 'Shmem: 4947724 kB' 'KernelStack: 11544 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 172620 kB' 'Slab: 682592 kB' 'SReclaimable: 172620 kB' 'SUnreclaim: 509972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
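The per-node dumps above already show where this test's name comes from: an odd total hugepage count split across the two nodes, with node 0 holding 513 pages and node 1 holding 512. The arithmetic behind that kind of split (remainder going to the lower-numbered node) can be illustrated as follows; this only reproduces the 513/512 figures seen in this run and is not the hugepages.sh distribution code itself.

# Illustration only: splitting an odd hugepage total over two nodes,
# giving the remainder to node 0 (matches the 513/512 split in this run).
total=1025 nodes=2
base=$((total / nodes)) rem=$((total % nodes))
for ((n = 0; n < nodes; n++)); do
    echo "node$n=$((base + (n < rem ? 1 : 0)))"
done
# prints: node0=513, node1=512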
00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.959 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:03:19.960 node0=513 expecting 513 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:19.960 node1=512 expecting 512 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:19.960 00:03:19.960 real 0m3.519s 00:03:19.960 user 0m1.369s 00:03:19.960 sys 0m2.208s 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:19.960 19:29:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:19.960 ************************************ 00:03:19.960 END TEST odd_alloc 00:03:19.960 ************************************ 00:03:19.960 19:29:55 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test 
custom_alloc custom_alloc 00:03:19.960 19:29:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:19.960 19:29:55 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:19.960 19:29:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:19.960 ************************************ 00:03:19.960 START TEST custom_alloc 00:03:19.960 ************************************ 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1129 -- # custom_alloc 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:19.960 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:19.961 19:29:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:03:19.961 19:29:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.961 19:29:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:23.245 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:23.246 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.510 
19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42451756 kB' 'MemAvailable: 43946124 kB' 'Buffers: 2708 kB' 'Cached: 10148608 kB' 'SwapCached: 72 kB' 'Active: 6646524 kB' 'Inactive: 4118284 kB' 'Active(anon): 5766636 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616604 kB' 'Mapped: 170180 kB' 'Shmem: 8568568 kB' 'KReclaimable: 559268 kB' 'Slab: 1545300 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 986032 kB' 'KernelStack: 21984 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 10472116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218260 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.510 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
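For context on the numbers in the dump above: earlier in this custom_alloc run, get_test_nr_hugepages turned the two requested sizes into counts, giving nodes_hp[0]=512 and nodes_hp[1]=1024 and HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'. With the 2048 kB hugepage size reported in the dump, 1048576/2048 = 512 and 2097152/2048 = 1024, and the verified total of 1536 pages matches 'Hugetlb: 3145728 kB' (1536 x 2048 kB). A tiny worked version of that conversion, with illustrative variable names only:

# Illustration of the size -> page-count conversion behind nodes_hp[0]=512
# and nodes_hp[1]=1024 (2048 kB hugepage size, as reported in the dump above).
hugepagesize_kb=2048
for size in 1048576 2097152; do
    echo "$size -> $((size / hugepagesize_kb)) hugepages"
done
echo "total: $(((1048576 + 2097152) / hugepagesize_kb)) hugepages"   # 1536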
00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.511 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
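The anon accounting traced in this block works like the earlier per-node lookups: verify_nr_hugepages first tests the transparent-hugepage setting ('always [madvise] never' in the [[ ... != *\[\n\e\v\e\r\]* ]] check above) and only then reads AnonHugePages from /proc/meminfo, apparently so that THP-backed anonymous memory can be factored into the expected totals. A stand-alone check in the same spirit, assuming the usual sysfs location for the THP switch; illustrative only, not the script's code:

# Stand-alone illustration of the THP guard traced above: read AnonHugePages
# only when transparent hugepages are not disabled ("[never]" not selected).
thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp_state != *"[never]"* ]]; then
    awk '/^AnonHugePages:/ {print "AnonHugePages:", $2, "kB"}' /proc/meminfo
else
    echo "THP disabled; AnonHugePages left out of the accounting"
fi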
00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42453100 kB' 'MemAvailable: 43947468 kB' 'Buffers: 2708 kB' 'Cached: 10148608 kB' 'SwapCached: 72 kB' 'Active: 6646180 kB' 'Inactive: 4118284 kB' 'Active(anon): 5766292 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616212 kB' 'Mapped: 170180 kB' 'Shmem: 8568568 kB' 'KReclaimable: 559268 kB' 'Slab: 1545308 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 986040 kB' 'KernelStack: 21776 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 10470628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218228 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.512 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
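The backslash-escaped keys on the right-hand side of each comparison (for example \H\u\g\e\P\a\g\e\s\_\S\u\r\p) are most likely how bash's xtrace renders a quoted, literally matched pattern inside [[ ]]: the script compares against the literal field name, not a glob, and set -x prints that literal pattern character-escaped. A small standalone illustration, with hypothetical variable values, assuming plain bash:

    set -x
    get=HugePages_Surp          # field being looked for
    var=MemTotal                # field just read from /proc/meminfo
    # Quoted RHS => literal comparison; xtrace typically prints it in the
    # character-escaped form seen throughout this log.
    [[ $var == "$get" ]] || echo "no match, keep scanning"
    set +x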
00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.513 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.514 19:29:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42450792 kB' 'MemAvailable: 43945160 kB' 'Buffers: 2708 kB' 'Cached: 10148628 kB' 'SwapCached: 72 kB' 'Active: 6646384 kB' 'Inactive: 4118284 kB' 'Active(anon): 5766496 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616412 kB' 'Mapped: 170180 kB' 'Shmem: 8568588 kB' 'KReclaimable: 559268 kB' 'Slab: 1545308 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 986040 kB' 'KernelStack: 21984 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 10470652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218276 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.514 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.515 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536 00:03:23.516 nr_hugepages=1536 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:23.516 resv_hugepages=0 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:23.516 surplus_hugepages=0 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:23.516 anon_hugepages=0 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages )) 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.516 
19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42449032 kB' 'MemAvailable: 43943400 kB' 'Buffers: 2708 kB' 'Cached: 10148628 kB' 'SwapCached: 72 kB' 'Active: 6646748 kB' 'Inactive: 4118284 kB' 'Active(anon): 5766860 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616788 kB' 'Mapped: 170176 kB' 'Shmem: 8568588 kB' 'KReclaimable: 559268 kB' 'Slab: 1545308 kB' 'SReclaimable: 559268 kB' 'SUnreclaim: 986040 kB' 'KernelStack: 22112 kB' 'PageTables: 9336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 10472176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218276 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.516 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 
19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:23.517 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 24630196 kB' 'MemUsed: 8004240 kB' 'SwapCached: 40 kB' 'Active: 4633820 kB' 'Inactive: 389304 kB' 'Active(anon): 3856392 kB' 'Inactive(anon): 48 kB' 'Active(file): 777428 kB' 'Inactive(file): 389256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4787592 kB' 'Mapped: 102036 kB' 'AnonPages: 238724 kB' 'Shmem: 3620868 kB' 'KernelStack: 10344 kB' 'PageTables: 4852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 386648 kB' 'Slab: 862752 kB' 'SReclaimable: 386648 kB' 'SUnreclaim: 476104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.518 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.518 19:29:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 17818616 kB' 'MemUsed: 9830744 kB' 'SwapCached: 
32 kB' 'Active: 2012856 kB' 'Inactive: 3728980 kB' 'Active(anon): 1910396 kB' 'Inactive(anon): 3415376 kB' 'Active(file): 102460 kB' 'Inactive(file): 313604 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5363860 kB' 'Mapped: 68144 kB' 'AnonPages: 378004 kB' 'Shmem: 4947764 kB' 'KernelStack: 11608 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 172620 kB' 'Slab: 682556 kB' 'SReclaimable: 172620 kB' 'SUnreclaim: 509936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.519 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 
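Editorial note: around this point the test accumulates per-node hugepage counts. The variable names below (nr_hugepages, surp, resv, nodes_test) come straight from the hugepages.sh xtrace; the values are the ones read back above, and the snippet is only a sketch of the accounting, not the test script itself:

    nr_hugepages=1536 surp=0 resv=0          # requested pages, surplus, reserved
    nodes_test=([0]=512 [1]=1024)            # per-node split requested by custom_alloc
    total=1536                               # HugePages_Total read back from /proc/meminfo

    # System-wide total must equal the requested pages plus surplus and reserved.
    (( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total"

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))       # add reserved pages (0 here)
        (( nodes_test[node] += 0 ))          # add per-node HugePages_Surp read above (0)
        echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
    done

    # Prints "node0=512 expecting 512" and "node1=1024 expecting 1024", matching the log.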
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:03:23.520 node0=512 expecting 512 00:03:23.520 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:23.521 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:23.521 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:23.521 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:03:23.521 node1=1024 expecting 1024 00:03:23.521 19:29:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:23.521 00:03:23.521 real 0m3.580s 00:03:23.521 user 0m1.377s 00:03:23.521 sys 0m2.273s 00:03:23.521 19:29:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:23.521 19:29:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:23.521 ************************************ 00:03:23.521 END TEST custom_alloc 00:03:23.521 ************************************ 00:03:23.779 19:29:58 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:23.779 19:29:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:23.779 19:29:58 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:03:23.779 19:29:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:23.779 ************************************ 00:03:23.779 START TEST no_shrink_alloc 00:03:23.779 ************************************ 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1129 -- # no_shrink_alloc 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.779 19:29:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:27.082 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:80:04.7 
(8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.082 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43417488 kB' 'MemAvailable: 44911848 kB' 'Buffers: 2708 kB' 'Cached: 10148776 kB' 'SwapCached: 72 kB' 'Active: 6656440 kB' 'Inactive: 4118284 kB' 'Active(anon): 5776552 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625976 kB' 'Mapped: 171164 kB' 'Shmem: 8568736 kB' 'KReclaimable: 559260 kB' 'Slab: 1545100 kB' 'SReclaimable: 559260 kB' 'SUnreclaim: 985840 kB' 'KernelStack: 22000 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 37481924 kB' 'Committed_AS: 10479324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218296 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.082 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.083 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 
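The long runs of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries above are setup/common.sh's get_meminfo helper walking every field of /proc/meminfo with IFS=': ' until it reaches the requested one; here the scan stops at AnonHugePages, the helper echoes 0, and verify_nr_hugepages (setup/hugepages.sh) records it as anon=0. The same scan repeats below for HugePages_Surp, HugePages_Rsvd and HugePages_Total before the accounting check "(( 1024 == nr_hugepages + surp + resv ))". A minimal, hypothetical sketch of that pattern follows; the *_sketch names are illustrative only, and the real get_meminfo additionally handles per-NUMA-node meminfo files and strips their "Node N" prefixes.

#!/usr/bin/env bash
# Simplified illustration of the pattern seen in this trace; not the real
# SPDK helpers. Runs on any Linux host with /proc/meminfo.

get_meminfo_sketch() {
        # Scan /proc/meminfo line by line, splitting on ':' and spaces,
        # and print the value of the requested field (e.g. HugePages_Total).
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done </proc/meminfo
        return 1
}

verify_nr_hugepages_sketch() {
        local expected=$1 surp resv total
        surp=$(get_meminfo_sketch HugePages_Surp)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        total=$(get_meminfo_sketch HugePages_Total)
        # Accounting relation exercised in the trace: the kernel's total must
        # cover the expected pages plus any surplus and reserved pages.
        (( total == expected + surp + resv ))
}

# 1024 matches the nr_hugepages value echoed later in this trace.
verify_nr_hugepages_sketch 1024 && echo "hugepage accounting OK"

The meminfo snapshot printed above is consistent with that check: HugePages_Total: 1024 at Hugepagesize: 2048 kB gives 1024 * 2048 kB = 2097152 kB, exactly the reported Hugetlb figure, i.e. 2 GiB of 2 MiB pages with no surplus or reserved pages.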
00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43418004 kB' 'MemAvailable: 44912364 kB' 'Buffers: 2708 kB' 'Cached: 10148780 kB' 'SwapCached: 72 kB' 'Active: 6656136 kB' 'Inactive: 4118284 kB' 'Active(anon): 5776248 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626200 kB' 'Mapped: 171052 kB' 'Shmem: 8568740 kB' 'KReclaimable: 559260 kB' 'Slab: 1545076 kB' 'SReclaimable: 559260 kB' 'SUnreclaim: 985816 kB' 'KernelStack: 22000 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10479340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218296 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.084 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.085 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43417524 kB' 'MemAvailable: 44911884 kB' 'Buffers: 2708 kB' 'Cached: 10148800 kB' 'SwapCached: 72 kB' 'Active: 6656160 kB' 'Inactive: 4118284 kB' 'Active(anon): 5776272 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626204 kB' 'Mapped: 171052 kB' 'Shmem: 8568760 kB' 'KReclaimable: 559260 kB' 'Slab: 1545076 kB' 'SReclaimable: 559260 kB' 'SUnreclaim: 985816 kB' 'KernelStack: 22000 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10479364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218296 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.086 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.087 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:27.088 nr_hugepages=1024 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:27.088 resv_hugepages=0 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:27.088 surplus_hugepages=0 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:27.088 anon_hugepages=0 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43417524 kB' 'MemAvailable: 44911884 kB' 'Buffers: 2708 kB' 'Cached: 10148840 kB' 'SwapCached: 72 kB' 'Active: 6655828 kB' 'Inactive: 4118284 kB' 'Active(anon): 5775940 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625796 kB' 'Mapped: 171052 kB' 'Shmem: 8568800 kB' 'KReclaimable: 559260 kB' 'Slab: 1545076 kB' 'SReclaimable: 559260 kB' 'SUnreclaim: 985816 kB' 'KernelStack: 21984 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10479384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218296 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.088 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 
19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.089 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:27.090 19:30:02 
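
The trace above is the get_meminfo helper from setup/common.sh scanning every /proc/meminfo key until it reaches the requested field (HugePages_Total here, which echoes 1024 before returning). A minimal stand-alone sketch of that parsing, reconstructed from the trace rather than copied from the repository, could look like the following; it is an approximation of the real helper, not the actual test/setup/common.sh code.

#!/usr/bin/env bash
# Sketch of get_meminfo as inferred from the xtrace above (approximation only).
# Usage: get_meminfo KEY [NODE]
# Prints the value of KEY from /proc/meminfo, or from
# /sys/devices/system/node/nodeN/meminfo when a NUMA node N is given.
get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}       # per-node rows carry a "Node N " prefix
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                  # e.g. 1024 for HugePages_Total
            return 0
        fi
    done <"$mem_f"
    echo 0   # fallback if the key is absent (assumption; not exercised in this trace)
}

get_meminfo HugePages_Total      # global count, 1024 in this run
get_meminfo HugePages_Surp 0     # surplus pages on node0, 0 in this run
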
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 23578372 kB' 'MemUsed: 9056064 kB' 'SwapCached: 40 kB' 'Active: 4635748 kB' 'Inactive: 389304 kB' 'Active(anon): 3858320 kB' 'Inactive(anon): 48 kB' 'Active(file): 777428 kB' 'Inactive(file): 389256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4787728 kB' 'Mapped: 102756 kB' 'AnonPages: 240564 kB' 'Shmem: 3621004 kB' 'KernelStack: 10392 kB' 'PageTables: 4944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 386648 kB' 'Slab: 862672 kB' 'SReclaimable: 386648 kB' 'SUnreclaim: 476024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.090 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 
19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:27.091 19:30:02 
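
At this point in the trace the node-0 HugePages_Surp lookup has returned 0 and the per-node counters have been folded into nodes_test; the next entry prints 'node0=1024 expecting 1024', i.e. all 1024 pages sit on node0 as the test expects. A loose reconstruction of that accounting, reusing the get_meminfo sketch above, is shown below; the function name and structure are guesses based on this trace, not the real test/setup/hugepages.sh.

# Hypothetical reconstruction of the verify_nr_hugepages accounting in this trace.
verify_hugepages_sketch() {
    # $1: number of 2048 kB huge pages the test expects (1024 in this run)
    local expected=$1 total surp resv node
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    total=$(get_meminfo HugePages_Total)
    # Global check seen in the trace: allocated == requested + surplus + reserved
    (( total == expected + surp + resv )) || return 1
    # Per-NUMA-node breakdown, read from /sys/devices/system/node/node*/meminfo;
    # the real script compares these against the expected per-node layout.
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        echo "node$node=$(get_meminfo HugePages_Total "$node")"
    done
}

verify_hugepages_sketch 1024   # with all pages on node0: prints "node0=1024" and "node1=0"
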
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:27.091 node0=1024 expecting 1024 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.091 19:30:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:30.381 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.381 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.644 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.644 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43376776 kB' 'MemAvailable: 44871136 kB' 'Buffers: 2708 kB' 'Cached: 10148932 kB' 'SwapCached: 72 kB' 'Active: 6653488 kB' 'Inactive: 4118284 kB' 'Active(anon): 5773600 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623348 kB' 'Mapped: 171296 kB' 'Shmem: 8568892 kB' 'KReclaimable: 559260 kB' 'Slab: 1545072 kB' 'SReclaimable: 559260 kB' 'SUnreclaim: 985812 kB' 'KernelStack: 22128 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10510400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218356 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.644 
19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.644 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.645 19:30:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.645 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43379376 kB' 'MemAvailable: 44873736 kB' 'Buffers: 2708 kB' 'Cached: 10148936 kB' 'SwapCached: 72 kB' 'Active: 6653120 kB' 'Inactive: 4118284 kB' 'Active(anon): 5773232 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622996 kB' 'Mapped: 171624 kB' 'Shmem: 8568896 kB' 'KReclaimable: 559260 kB' 'Slab: 1545068 kB' 'SReclaimable: 559260 kB' 'SUnreclaim: 985808 kB' 'KernelStack: 22048 kB' 
'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10507804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218324 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.646 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
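[Editor's note, not part of the console output] The long run of "continue" trace lines above is setup/common.sh's get_meminfo helper scanning /proc/meminfo for one key at a time (here HugePages_Surp, after AnonHugePages). A minimal bash sketch of the loop being traced, reconstructed from this log rather than from the SPDK source, so exact names and details are illustrative only:

    shopt -s extglob                                  # needed for the +([0-9]) pattern below
    get_meminfo() {                                   # e.g. get_meminfo HugePages_Surp [node]
        local get=$1 node=${2:-} var val line
        local mem_f=/proc/meminfo mem
        # with a node argument the per-node meminfo file would be used instead
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")              # strip the "Node N " prefix used by per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"    # split "Key: value kB" into var / val
            [[ $var == "$get" ]] || continue          # each mismatch is one "continue" line in the trace
            echo "$val"                               # matched: print the number (0 for HugePages_Surp here)
            return 0
        done
    }

Every key printed in the meminfo snapshot above produces exactly one compare-and-continue pair in the trace until the requested key is reached, which is why the loop output is so repetitive.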
00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.647 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.648 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43379984 kB' 'MemAvailable: 44874344 kB' 'Buffers: 2708 kB' 'Cached: 10148952 kB' 'SwapCached: 72 kB' 'Active: 6652204 kB' 'Inactive: 4118284 kB' 'Active(anon): 5772316 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621948 kB' 'Mapped: 171156 kB' 'Shmem: 8568912 kB' 'KReclaimable: 559260 kB' 'Slab: 1545068 kB' 'SReclaimable: 559260 kB' 'SUnreclaim: 985808 kB' 'KernelStack: 22048 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10506776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218308 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 
0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.649 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 19:30:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.651 19:30:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:30.651 nr_hugepages=1024 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:30.651 resv_hugepages=0 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:30.651 surplus_hugepages=0 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:30.651 anon_hugepages=0 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.651 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43382032 kB' 'MemAvailable: 44876392 kB' 'Buffers: 2708 kB' 'Cached: 10148976 kB' 'SwapCached: 72 kB' 'Active: 6651120 kB' 'Inactive: 4118284 kB' 'Active(anon): 5771232 kB' 'Inactive(anon): 3415424 kB' 'Active(file): 879888 kB' 'Inactive(file): 702860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620864 kB' 'Mapped: 171156 kB' 'Shmem: 8568936 kB' 'KReclaimable: 559260 kB' 'Slab: 1545060 kB' 'SReclaimable: 559260 kB' 'SUnreclaim: 985800 kB' 'KernelStack: 22032 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10506680 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 218292 kB' 'VmallocChunk: 0 kB' 'Percpu: 110656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.652 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 
19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.654 
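A minimal sketch of what the get_meminfo trace above amounts to, assuming only bash and the stock /proc/meminfo layout; the function name is illustrative and not part of setup/common.sh:

#!/usr/bin/env bash
# Scan /proc/meminfo (or a per-node meminfo file) for one field and print its value.
get_meminfo_sketch() {
    local field=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node files prefix every field with "Node <n> ", as seen in the node0 read below.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _rest
    while IFS= read -r line; do
        line=${line#Node [0-9]* }              # strip the per-node prefix if present
        IFS=': ' read -r var val _rest <<< "$line"
        if [[ $var == "$field" ]]; then
            echo "${val:-0}"                   # e.g. HugePages_Total -> 1024 in this run
            return 0
        fi
    done < "$mem_f"
    return 1
}

# The accounting check performed by hugepages.sh, with the values observed above:
total=$(get_meminfo_sketch HugePages_Total)    # 1024
rsvd=$(get_meminfo_sketch HugePages_Rsvd)      # 0
surp=$(get_meminfo_sketch HugePages_Surp)      # 0
(( total == 1024 + surp + rsvd )) && echo "hugepage accounting is consistent"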
19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 23527632 kB' 'MemUsed: 9106804 kB' 'SwapCached: 40 kB' 'Active: 4635588 kB' 'Inactive: 389304 kB' 'Active(anon): 3858160 kB' 'Inactive(anon): 48 kB' 'Active(file): 777428 kB' 'Inactive(file): 389256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4787844 kB' 'Mapped: 102952 kB' 'AnonPages: 240168 kB' 'Shmem: 3621120 kB' 'KernelStack: 10376 kB' 'PageTables: 4852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 386648 kB' 'Slab: 862536 kB' 'SReclaimable: 386648 kB' 'SUnreclaim: 475888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:30.656 node0=1024 expecting 1024 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.656 00:03:30.656 real 0m7.076s 00:03:30.656 user 0m2.633s 00:03:30.656 sys 0m4.572s 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:30.656 19:30:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.656 ************************************ 00:03:30.656 END TEST no_shrink_alloc 00:03:30.656 ************************************ 00:03:30.915 19:30:05 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:03:30.915 19:30:05 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:03:30.915 19:30:05 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:30.915 19:30:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.915 19:30:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:30.915 19:30:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.915 19:30:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:30.915 19:30:05 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:30.915 19:30:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.915 19:30:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:30.915 19:30:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.915 19:30:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:30.915 19:30:05 setup.sh.hugepages -- 
setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:03:30.915 19:30:05 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:03:30.915 00:03:30.915 real 0m23.245s 00:03:30.915 user 0m8.078s 00:03:30.915 sys 0m13.950s 00:03:30.915 19:30:05 setup.sh.hugepages -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:30.915 19:30:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.915 ************************************ 00:03:30.915 END TEST hugepages 00:03:30.915 ************************************ 00:03:30.915 19:30:06 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:30.915 19:30:06 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:30.915 19:30:06 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:30.915 19:30:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:30.915 ************************************ 00:03:30.915 START TEST driver 00:03:30.915 ************************************ 00:03:30.915 19:30:06 setup.sh.driver -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:30.915 * Looking for test storage... 00:03:30.915 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:30.915 19:30:06 setup.sh.driver -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:30.915 19:30:06 setup.sh.driver -- common/autotest_common.sh@1693 -- # lcov --version 00:03:30.915 19:30:06 setup.sh.driver -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:31.213 19:30:06 setup.sh.driver -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.213 19:30:06 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:03:31.213 19:30:06 setup.sh.driver -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.213 19:30:06 setup.sh.driver -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:31.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.213 --rc genhtml_branch_coverage=1 00:03:31.213 --rc genhtml_function_coverage=1 00:03:31.213 --rc genhtml_legend=1 00:03:31.213 --rc geninfo_all_blocks=1 00:03:31.213 --rc geninfo_unexecuted_blocks=1 00:03:31.213 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:31.213 ' 00:03:31.213 19:30:06 setup.sh.driver -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:31.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.213 --rc genhtml_branch_coverage=1 00:03:31.213 --rc genhtml_function_coverage=1 00:03:31.213 --rc genhtml_legend=1 00:03:31.213 --rc geninfo_all_blocks=1 00:03:31.213 --rc geninfo_unexecuted_blocks=1 00:03:31.213 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:31.213 ' 00:03:31.213 19:30:06 setup.sh.driver -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:31.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.213 --rc genhtml_branch_coverage=1 00:03:31.213 --rc genhtml_function_coverage=1 00:03:31.213 --rc genhtml_legend=1 00:03:31.213 --rc geninfo_all_blocks=1 00:03:31.213 --rc geninfo_unexecuted_blocks=1 00:03:31.213 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:31.213 ' 00:03:31.213 19:30:06 setup.sh.driver -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:31.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.213 --rc genhtml_branch_coverage=1 00:03:31.213 --rc genhtml_function_coverage=1 00:03:31.213 --rc genhtml_legend=1 00:03:31.213 --rc geninfo_all_blocks=1 00:03:31.213 --rc geninfo_unexecuted_blocks=1 00:03:31.213 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:31.213 ' 00:03:31.213 19:30:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:31.213 19:30:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.213 19:30:06 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.487 19:30:10 setup.sh.driver -- 
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:35.487 19:30:10 setup.sh.driver -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.487 19:30:10 setup.sh.driver -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.487 19:30:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:35.487 ************************************ 00:03:35.487 START TEST guess_driver 00:03:35.487 ************************************ 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1129 -- # guess_driver 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:35.487 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:35.487 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:35.487 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:35.487 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:35.487 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:35.487 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:35.487 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:35.487 Looking for driver=vfio-pci 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
# setup output config 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.487 19:30:10 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.768 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.026 19:30:14 
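A short sketch of the decision the guess_driver trace above encodes: prefer vfio-pci when the host exposes IOMMU groups and the vfio_pci module resolves, otherwise report the same "No valid driver found" marker the script tests for. The function name is illustrative, and the 176-group count seen above is specific to this host:

#!/usr/bin/env bash
shopt -s nullglob   # so an empty /sys/kernel/iommu_groups yields a zero-length array
pick_driver_sketch() {
    # One directory per IOMMU group; a non-zero count means the IOMMU is enabled.
    local groups=(/sys/kernel/iommu_groups/*)
    # modprobe --show-depends prints the .ko dependency chain without loading anything.
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}
driver=$(pick_driver_sketch)
echo "Looking for driver=$driver"    # matches the marker echoed in the log above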
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.026 19:30:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.927 19:30:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.927 19:30:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.927 19:30:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.927 19:30:15 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:40.927 19:30:15 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:40.927 19:30:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.927 19:30:15 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.196 00:03:46.196 real 0m9.762s 00:03:46.196 user 0m2.542s 00:03:46.196 sys 0m4.900s 00:03:46.196 19:30:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.196 19:30:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:46.196 ************************************ 00:03:46.196 END TEST guess_driver 00:03:46.196 ************************************ 00:03:46.196 00:03:46.196 real 0m14.495s 00:03:46.196 user 0m3.793s 00:03:46.196 sys 0m7.570s 00:03:46.196 19:30:20 setup.sh.driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.196 19:30:20 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:46.196 ************************************ 00:03:46.196 END TEST driver 00:03:46.196 ************************************ 00:03:46.196 19:30:20 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:46.196 19:30:20 setup.sh -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.196 19:30:20 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.196 19:30:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.196 ************************************ 00:03:46.196 START TEST devices 00:03:46.196 ************************************ 00:03:46.196 19:30:20 setup.sh.devices -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:46.196 * Looking for test storage... 00:03:46.196 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:46.196 19:30:20 setup.sh.devices -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:46.196 19:30:20 setup.sh.devices -- common/autotest_common.sh@1693 -- # lcov --version 00:03:46.197 19:30:20 setup.sh.devices -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:46.197 19:30:20 setup.sh.devices -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.197 19:30:20 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:03:46.197 19:30:20 setup.sh.devices -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.197 19:30:20 setup.sh.devices -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:46.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.197 --rc genhtml_branch_coverage=1 00:03:46.197 --rc genhtml_function_coverage=1 00:03:46.197 --rc genhtml_legend=1 00:03:46.197 --rc geninfo_all_blocks=1 00:03:46.197 --rc geninfo_unexecuted_blocks=1 00:03:46.197 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:46.197 ' 00:03:46.197 19:30:20 setup.sh.devices -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:46.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.197 --rc genhtml_branch_coverage=1 00:03:46.197 --rc genhtml_function_coverage=1 00:03:46.197 --rc genhtml_legend=1 00:03:46.197 --rc geninfo_all_blocks=1 00:03:46.197 --rc geninfo_unexecuted_blocks=1 00:03:46.197 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:46.197 ' 00:03:46.197 19:30:20 setup.sh.devices -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:46.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.197 --rc genhtml_branch_coverage=1 00:03:46.197 --rc genhtml_function_coverage=1 00:03:46.197 --rc genhtml_legend=1 00:03:46.197 --rc geninfo_all_blocks=1 00:03:46.197 --rc geninfo_unexecuted_blocks=1 00:03:46.197 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:46.197 ' 00:03:46.197 19:30:20 setup.sh.devices -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:46.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.197 --rc genhtml_branch_coverage=1 00:03:46.197 --rc genhtml_function_coverage=1 00:03:46.197 --rc genhtml_legend=1 00:03:46.197 --rc geninfo_all_blocks=1 00:03:46.197 --rc geninfo_unexecuted_blocks=1 00:03:46.197 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:46.197 ' 00:03:46.197 19:30:20 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:46.197 19:30:20 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:46.197 19:30:20 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.197 19:30:20 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:49.478 19:30:24 setup.sh.devices -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:49.478 19:30:24 setup.sh.devices -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:49.478 19:30:24 setup.sh.devices -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:49.478 19:30:24 setup.sh.devices -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:49.478 19:30:24 setup.sh.devices -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:49.478 19:30:24 setup.sh.devices -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:49.478 19:30:24 setup.sh.devices -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.478 19:30:24 setup.sh.devices -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:49.478 19:30:24 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:03:49.478 19:30:24 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:49.478 No valid GPT data, bailing 00:03:49.478 19:30:24 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:49.478 19:30:24 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:03:49.478 19:30:24 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:49.478 19:30:24 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:49.478 19:30:24 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:49.478 19:30:24 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:49.478 19:30:24 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:49.478 19:30:24 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.479 19:30:24 
setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.479 19:30:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:49.479 ************************************ 00:03:49.479 START TEST nvme_mount 00:03:49.479 ************************************ 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1129 -- # nvme_mount 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:49.479 19:30:24 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:50.046 Creating new GPT entries in memory. 00:03:50.046 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:50.046 other utilities. 00:03:50.046 19:30:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:50.046 19:30:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.046 19:30:25 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:50.046 19:30:25 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:50.046 19:30:25 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:51.424 Creating new GPT entries in memory. 00:03:51.424 The operation has completed successfully. 
00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1088958 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.424 19:30:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.714 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:54.715 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.715 19:30:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:54.715 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:54.715 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:54.715 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:54.715 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:54.715 19:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:54.715 19:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:54.715 19:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.981 19:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.982 19:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # 
local pci status 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.268 19:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.807 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.067 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.067 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:01.067 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:01.067 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.067 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.067 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:01.067 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:01.067 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:01.067 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.067 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.067 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.067 19:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:01.067 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.067 00:04:01.067 real 0m12.052s 00:04:01.067 user 0m3.398s 00:04:01.067 sys 0m6.540s 00:04:01.067 19:30:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.067 19:30:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:01.067 ************************************ 00:04:01.067 END TEST nvme_mount 00:04:01.067 ************************************ 00:04:01.067 19:30:36 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:01.067 19:30:36 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.067 19:30:36 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.067 19:30:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:01.326 ************************************ 00:04:01.326 START TEST dm_mount 00:04:01.326 ************************************ 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- common/autotest_common.sh@1129 -- # dm_mount 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # 
local disk=nvme0n1 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:01.326 19:30:36 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:02.263 Creating new GPT entries in memory. 00:04:02.263 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:02.263 other utilities. 00:04:02.263 19:30:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:02.263 19:30:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.263 19:30:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:02.263 19:30:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:02.263 19:30:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:03.199 Creating new GPT entries in memory. 00:04:03.199 The operation has completed successfully. 00:04:03.199 19:30:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:03.199 19:30:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.199 19:30:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:03.199 19:30:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:03.199 19:30:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:04.137 The operation has completed successfully. 
00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1093156 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.396 19:30:39 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:04:04.397 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:04.397 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:04.397 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:04.397 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.397 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:04.397 19:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:04.397 19:30:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.397 19:30:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:07.688 19:30:42 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.688 19:30:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:10.978 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:10.978 00:04:10.978 real 0m9.590s 00:04:10.978 user 0m2.168s 00:04:10.978 sys 0m4.463s 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.978 19:30:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:10.978 ************************************ 00:04:10.978 END TEST dm_mount 00:04:10.978 ************************************ 00:04:10.978 19:30:46 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:10.978 19:30:46 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:10.978 19:30:46 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.978 19:30:46 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.978 19:30:46 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:10.978 19:30:46 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.978 19:30:46 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:11.238 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:11.238 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:11.238 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:11.238 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:11.238 19:30:46 setup.sh.devices -- 
setup/devices.sh@12 -- # cleanup_dm 00:04:11.238 19:30:46 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:11.238 19:30:46 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:11.238 19:30:46 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:11.238 19:30:46 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:11.238 19:30:46 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:11.238 19:30:46 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:11.238 00:04:11.238 real 0m25.655s 00:04:11.238 user 0m6.836s 00:04:11.238 sys 0m13.611s 00:04:11.238 19:30:46 setup.sh.devices -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.238 19:30:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:11.238 ************************************ 00:04:11.238 END TEST devices 00:04:11.238 ************************************ 00:04:11.238 00:04:11.238 real 1m26.878s 00:04:11.238 user 0m25.921s 00:04:11.238 sys 0m49.428s 00:04:11.238 19:30:46 setup.sh -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.238 19:30:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:11.238 ************************************ 00:04:11.238 END TEST setup.sh 00:04:11.238 ************************************ 00:04:11.238 19:30:46 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:14.526 Hugepages 00:04:14.526 node hugesize free / total 00:04:14.526 node0 1048576kB 0 / 0 00:04:14.526 node0 2048kB 1024 / 1024 00:04:14.526 node1 1048576kB 0 / 0 00:04:14.526 node1 2048kB 1024 / 1024 00:04:14.526 00:04:14.526 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:14.526 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:14.526 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:14.526 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:14.526 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:14.526 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:14.526 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:14.526 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:14.526 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:14.526 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:14.526 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:14.526 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:14.526 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:14.526 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:14.526 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:14.526 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:14.526 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:14.526 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:14.526 19:30:49 -- spdk/autotest.sh@117 -- # uname -s 00:04:14.526 19:30:49 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:14.526 19:30:49 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:14.526 19:30:49 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:17.812 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:17.812 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:17.812 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:17.812 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:17.812 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:17.812 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:17.812 0000:00:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:04:17.812 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:17.812 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:17.812 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:17.812 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:17.812 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:17.812 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:17.812 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:17.812 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:17.812 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:19.717 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:19.717 19:30:54 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:20.650 19:30:55 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:20.650 19:30:55 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:20.650 19:30:55 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:20.650 19:30:55 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:20.650 19:30:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:20.650 19:30:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:20.650 19:30:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.650 19:30:55 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:20.650 19:30:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:20.650 19:30:55 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:20.650 19:30:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:20.650 19:30:55 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.932 Waiting for block devices as requested 00:04:23.932 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:23.932 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:23.932 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:23.932 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:24.190 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:24.190 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:24.190 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:24.190 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:24.448 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:24.448 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:24.448 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:24.707 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:24.707 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:24.707 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:24.965 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:24.965 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:24.965 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:25.223 19:31:00 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:25.223 19:31:00 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:25.223 19:31:00 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:25.223 19:31:00 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:04:25.223 19:31:00 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:25.223 19:31:00 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:25.223 19:31:00 -- common/autotest_common.sh@1492 -- # basename 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:25.223 19:31:00 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:25.223 19:31:00 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:25.223 19:31:00 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:25.223 19:31:00 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:25.223 19:31:00 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:25.223 19:31:00 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:25.223 19:31:00 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:25.223 19:31:00 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:25.223 19:31:00 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:25.223 19:31:00 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:25.223 19:31:00 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:25.223 19:31:00 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:25.223 19:31:00 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:25.223 19:31:00 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:25.223 19:31:00 -- common/autotest_common.sh@1543 -- # continue 00:04:25.223 19:31:00 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:25.223 19:31:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.223 19:31:00 -- common/autotest_common.sh@10 -- # set +x 00:04:25.223 19:31:00 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:25.223 19:31:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.223 19:31:00 -- common/autotest_common.sh@10 -- # set +x 00:04:25.482 19:31:00 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:28.769 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:28.769 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:30.147 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:30.405 19:31:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:30.405 19:31:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.405 19:31:05 -- common/autotest_common.sh@10 -- # set +x 00:04:30.405 19:31:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:30.405 19:31:05 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:30.405 19:31:05 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:30.405 19:31:05 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:30.405 19:31:05 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:30.405 19:31:05 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:30.405 19:31:05 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:30.405 19:31:05 -- 
common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:30.405 19:31:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:30.405 19:31:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:30.405 19:31:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:30.405 19:31:05 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:30.405 19:31:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:30.405 19:31:05 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:30.405 19:31:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:30.405 19:31:05 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:30.405 19:31:05 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:30.405 19:31:05 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:30.405 19:31:05 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:30.405 19:31:05 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:30.405 19:31:05 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:30.405 19:31:05 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:04:30.405 19:31:05 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:04:30.405 19:31:05 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1102686 00:04:30.405 19:31:05 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.405 19:31:05 -- common/autotest_common.sh@1585 -- # waitforlisten 1102686 00:04:30.405 19:31:05 -- common/autotest_common.sh@835 -- # '[' -z 1102686 ']' 00:04:30.405 19:31:05 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.405 19:31:05 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.405 19:31:05 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.405 19:31:05 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.405 19:31:05 -- common/autotest_common.sh@10 -- # set +x 00:04:30.406 [2024-11-26 19:31:05.705113] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
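The opal_revert_cleanup step above narrows the enumerated NVMe controllers to those whose PCI device ID is 0x0a54 by reading each BDF's sysfs device file. A minimal sketch of that pattern, reusing the same $rootdir layout and jq filter visible in the trace (illustrative, not the literal autotest_common.sh source):

    get_bdfs_by_device_id() {
        # Enumerate NVMe BDFs the way the trace does, then keep only those whose
        # PCI device ID matches the requested value (e.g. 0x0a54).
        local want=$1 bdf matched=()
        for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
            if [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$want" ]]; then
                matched+=("$bdf")
            fi
        done
        printf '%s\n' "${matched[@]}"
    }

    # On this host: get_bdfs_by_device_id 0x0a54  ->  0000:d8:00.0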
00:04:30.406 [2024-11-26 19:31:05.705197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1102686 ] 00:04:30.664 [2024-11-26 19:31:05.778585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.664 [2024-11-26 19:31:05.820280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.924 19:31:06 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.924 19:31:06 -- common/autotest_common.sh@868 -- # return 0 00:04:30.924 19:31:06 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:30.924 19:31:06 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:30.924 19:31:06 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:34.271 nvme0n1 00:04:34.271 19:31:09 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:34.271 [2024-11-26 19:31:09.224055] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:34.271 request: 00:04:34.271 { 00:04:34.271 "nvme_ctrlr_name": "nvme0", 00:04:34.271 "password": "test", 00:04:34.271 "method": "bdev_nvme_opal_revert", 00:04:34.271 "req_id": 1 00:04:34.271 } 00:04:34.271 Got JSON-RPC error response 00:04:34.271 response: 00:04:34.271 { 00:04:34.271 "code": -32602, 00:04:34.271 "message": "Invalid parameters" 00:04:34.271 } 00:04:34.271 19:31:09 -- common/autotest_common.sh@1591 -- # true 00:04:34.271 19:31:09 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:34.271 19:31:09 -- common/autotest_common.sh@1595 -- # killprocess 1102686 00:04:34.271 19:31:09 -- common/autotest_common.sh@954 -- # '[' -z 1102686 ']' 00:04:34.271 19:31:09 -- common/autotest_common.sh@958 -- # kill -0 1102686 00:04:34.271 19:31:09 -- common/autotest_common.sh@959 -- # uname 00:04:34.271 19:31:09 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.271 19:31:09 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1102686 00:04:34.271 19:31:09 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.271 19:31:09 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.271 19:31:09 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1102686' 00:04:34.271 killing process with pid 1102686 00:04:34.271 19:31:09 -- common/autotest_common.sh@973 -- # kill 1102686 00:04:34.271 19:31:09 -- common/autotest_common.sh@978 -- # wait 1102686 00:04:36.219 19:31:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:36.219 19:31:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:36.219 19:31:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:36.219 19:31:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:36.219 19:31:11 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:36.219 19:31:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:36.219 19:31:11 -- common/autotest_common.sh@10 -- # set +x 00:04:36.219 19:31:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:36.219 19:31:11 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:36.219 19:31:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.219 19:31:11 -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:04:36.219 19:31:11 -- common/autotest_common.sh@10 -- # set +x 00:04:36.479 ************************************ 00:04:36.479 START TEST env 00:04:36.479 ************************************ 00:04:36.479 19:31:11 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:36.479 * Looking for test storage... 00:04:36.479 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:36.479 19:31:11 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:36.479 19:31:11 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:36.479 19:31:11 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.479 19:31:11 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.479 19:31:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.479 19:31:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.479 19:31:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.479 19:31:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.479 19:31:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.479 19:31:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.479 19:31:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.479 19:31:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.479 19:31:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.479 19:31:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.479 19:31:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.479 19:31:11 env -- scripts/common.sh@344 -- # case "$op" in 00:04:36.479 19:31:11 env -- scripts/common.sh@345 -- # : 1 00:04:36.479 19:31:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.479 19:31:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.479 19:31:11 env -- scripts/common.sh@365 -- # decimal 1 00:04:36.479 19:31:11 env -- scripts/common.sh@353 -- # local d=1 00:04:36.479 19:31:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.479 19:31:11 env -- scripts/common.sh@355 -- # echo 1 00:04:36.479 19:31:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.479 19:31:11 env -- scripts/common.sh@366 -- # decimal 2 00:04:36.479 19:31:11 env -- scripts/common.sh@353 -- # local d=2 00:04:36.479 19:31:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.479 19:31:11 env -- scripts/common.sh@355 -- # echo 2 00:04:36.479 19:31:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.479 19:31:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.479 19:31:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.479 19:31:11 env -- scripts/common.sh@368 -- # return 0 00:04:36.479 19:31:11 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.479 19:31:11 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.479 --rc genhtml_branch_coverage=1 00:04:36.479 --rc genhtml_function_coverage=1 00:04:36.479 --rc genhtml_legend=1 00:04:36.479 --rc geninfo_all_blocks=1 00:04:36.479 --rc geninfo_unexecuted_blocks=1 00:04:36.479 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.479 ' 00:04:36.479 19:31:11 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.479 --rc genhtml_branch_coverage=1 00:04:36.479 --rc genhtml_function_coverage=1 00:04:36.479 --rc genhtml_legend=1 00:04:36.479 --rc geninfo_all_blocks=1 00:04:36.479 --rc geninfo_unexecuted_blocks=1 00:04:36.479 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.479 ' 00:04:36.479 19:31:11 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:36.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.479 --rc genhtml_branch_coverage=1 00:04:36.479 --rc genhtml_function_coverage=1 00:04:36.479 --rc genhtml_legend=1 00:04:36.479 --rc geninfo_all_blocks=1 00:04:36.479 --rc geninfo_unexecuted_blocks=1 00:04:36.479 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.479 ' 00:04:36.479 19:31:11 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.479 --rc genhtml_branch_coverage=1 00:04:36.479 --rc genhtml_function_coverage=1 00:04:36.479 --rc genhtml_legend=1 00:04:36.479 --rc geninfo_all_blocks=1 00:04:36.479 --rc geninfo_unexecuted_blocks=1 00:04:36.479 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.479 ' 00:04:36.480 19:31:11 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:36.480 19:31:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.480 19:31:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.480 19:31:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.480 ************************************ 00:04:36.480 START TEST env_memory 00:04:36.480 ************************************ 00:04:36.480 19:31:11 env.env_memory -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:36.480 00:04:36.480 00:04:36.480 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.480 http://cunit.sourceforge.net/ 00:04:36.480 00:04:36.480 00:04:36.480 Suite: memory 00:04:36.740 Test: alloc and free memory map ...[2024-11-26 19:31:11.799320] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:36.740 passed 00:04:36.740 Test: mem map translation ...[2024-11-26 19:31:11.811603] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:36.740 [2024-11-26 19:31:11.811620] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:36.740 [2024-11-26 19:31:11.811649] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:36.740 [2024-11-26 19:31:11.811657] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:36.740 passed 00:04:36.740 Test: mem map registration ...[2024-11-26 19:31:11.831323] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:36.740 [2024-11-26 19:31:11.831339] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:36.740 passed 00:04:36.740 Test: mem map adjacent registrations ...passed 00:04:36.740 00:04:36.740 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.740 suites 1 1 n/a 0 0 00:04:36.740 tests 4 4 4 0 0 00:04:36.740 asserts 152 152 152 0 n/a 00:04:36.740 00:04:36.740 Elapsed time = 0.080 seconds 00:04:36.740 00:04:36.740 real 0m0.093s 00:04:36.740 user 0m0.080s 00:04:36.740 sys 0m0.013s 00:04:36.740 19:31:11 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.740 19:31:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:36.740 ************************************ 00:04:36.740 END TEST env_memory 00:04:36.740 ************************************ 00:04:36.740 19:31:11 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:36.740 19:31:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.740 19:31:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.740 19:31:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.740 ************************************ 00:04:36.740 START TEST env_vtophys 00:04:36.740 ************************************ 00:04:36.740 19:31:11 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:36.740 EAL: lib.eal log level changed from notice to debug 00:04:36.740 EAL: Detected lcore 0 as core 0 on socket 0 00:04:36.740 EAL: Detected lcore 1 as core 1 on socket 0 00:04:36.740 EAL: Detected lcore 2 as core 2 on socket 0 00:04:36.740 EAL: Detected lcore 3 as 
core 3 on socket 0 00:04:36.740 EAL: Detected lcore 4 as core 4 on socket 0 00:04:36.740 EAL: Detected lcore 5 as core 5 on socket 0 00:04:36.740 EAL: Detected lcore 6 as core 6 on socket 0 00:04:36.740 EAL: Detected lcore 7 as core 8 on socket 0 00:04:36.740 EAL: Detected lcore 8 as core 9 on socket 0 00:04:36.740 EAL: Detected lcore 9 as core 10 on socket 0 00:04:36.740 EAL: Detected lcore 10 as core 11 on socket 0 00:04:36.740 EAL: Detected lcore 11 as core 12 on socket 0 00:04:36.740 EAL: Detected lcore 12 as core 13 on socket 0 00:04:36.740 EAL: Detected lcore 13 as core 14 on socket 0 00:04:36.740 EAL: Detected lcore 14 as core 16 on socket 0 00:04:36.740 EAL: Detected lcore 15 as core 17 on socket 0 00:04:36.740 EAL: Detected lcore 16 as core 18 on socket 0 00:04:36.740 EAL: Detected lcore 17 as core 19 on socket 0 00:04:36.740 EAL: Detected lcore 18 as core 20 on socket 0 00:04:36.740 EAL: Detected lcore 19 as core 21 on socket 0 00:04:36.740 EAL: Detected lcore 20 as core 22 on socket 0 00:04:36.740 EAL: Detected lcore 21 as core 24 on socket 0 00:04:36.740 EAL: Detected lcore 22 as core 25 on socket 0 00:04:36.740 EAL: Detected lcore 23 as core 26 on socket 0 00:04:36.740 EAL: Detected lcore 24 as core 27 on socket 0 00:04:36.740 EAL: Detected lcore 25 as core 28 on socket 0 00:04:36.740 EAL: Detected lcore 26 as core 29 on socket 0 00:04:36.740 EAL: Detected lcore 27 as core 30 on socket 0 00:04:36.740 EAL: Detected lcore 28 as core 0 on socket 1 00:04:36.740 EAL: Detected lcore 29 as core 1 on socket 1 00:04:36.740 EAL: Detected lcore 30 as core 2 on socket 1 00:04:36.740 EAL: Detected lcore 31 as core 3 on socket 1 00:04:36.740 EAL: Detected lcore 32 as core 4 on socket 1 00:04:36.740 EAL: Detected lcore 33 as core 5 on socket 1 00:04:36.740 EAL: Detected lcore 34 as core 6 on socket 1 00:04:36.740 EAL: Detected lcore 35 as core 8 on socket 1 00:04:36.740 EAL: Detected lcore 36 as core 9 on socket 1 00:04:36.740 EAL: Detected lcore 37 as core 10 on socket 1 00:04:36.740 EAL: Detected lcore 38 as core 11 on socket 1 00:04:36.740 EAL: Detected lcore 39 as core 12 on socket 1 00:04:36.740 EAL: Detected lcore 40 as core 13 on socket 1 00:04:36.740 EAL: Detected lcore 41 as core 14 on socket 1 00:04:36.740 EAL: Detected lcore 42 as core 16 on socket 1 00:04:36.740 EAL: Detected lcore 43 as core 17 on socket 1 00:04:36.740 EAL: Detected lcore 44 as core 18 on socket 1 00:04:36.740 EAL: Detected lcore 45 as core 19 on socket 1 00:04:36.740 EAL: Detected lcore 46 as core 20 on socket 1 00:04:36.740 EAL: Detected lcore 47 as core 21 on socket 1 00:04:36.740 EAL: Detected lcore 48 as core 22 on socket 1 00:04:36.740 EAL: Detected lcore 49 as core 24 on socket 1 00:04:36.740 EAL: Detected lcore 50 as core 25 on socket 1 00:04:36.740 EAL: Detected lcore 51 as core 26 on socket 1 00:04:36.740 EAL: Detected lcore 52 as core 27 on socket 1 00:04:36.740 EAL: Detected lcore 53 as core 28 on socket 1 00:04:36.740 EAL: Detected lcore 54 as core 29 on socket 1 00:04:36.740 EAL: Detected lcore 55 as core 30 on socket 1 00:04:36.740 EAL: Detected lcore 56 as core 0 on socket 0 00:04:36.740 EAL: Detected lcore 57 as core 1 on socket 0 00:04:36.740 EAL: Detected lcore 58 as core 2 on socket 0 00:04:36.740 EAL: Detected lcore 59 as core 3 on socket 0 00:04:36.740 EAL: Detected lcore 60 as core 4 on socket 0 00:04:36.740 EAL: Detected lcore 61 as core 5 on socket 0 00:04:36.740 EAL: Detected lcore 62 as core 6 on socket 0 00:04:36.740 EAL: Detected lcore 63 as core 8 on socket 0 00:04:36.740 EAL: 
Detected lcore 64 as core 9 on socket 0 00:04:36.740 EAL: Detected lcore 65 as core 10 on socket 0 00:04:36.740 EAL: Detected lcore 66 as core 11 on socket 0 00:04:36.740 EAL: Detected lcore 67 as core 12 on socket 0 00:04:36.740 EAL: Detected lcore 68 as core 13 on socket 0 00:04:36.740 EAL: Detected lcore 69 as core 14 on socket 0 00:04:36.740 EAL: Detected lcore 70 as core 16 on socket 0 00:04:36.740 EAL: Detected lcore 71 as core 17 on socket 0 00:04:36.740 EAL: Detected lcore 72 as core 18 on socket 0 00:04:36.740 EAL: Detected lcore 73 as core 19 on socket 0 00:04:36.740 EAL: Detected lcore 74 as core 20 on socket 0 00:04:36.741 EAL: Detected lcore 75 as core 21 on socket 0 00:04:36.741 EAL: Detected lcore 76 as core 22 on socket 0 00:04:36.741 EAL: Detected lcore 77 as core 24 on socket 0 00:04:36.741 EAL: Detected lcore 78 as core 25 on socket 0 00:04:36.741 EAL: Detected lcore 79 as core 26 on socket 0 00:04:36.741 EAL: Detected lcore 80 as core 27 on socket 0 00:04:36.741 EAL: Detected lcore 81 as core 28 on socket 0 00:04:36.741 EAL: Detected lcore 82 as core 29 on socket 0 00:04:36.741 EAL: Detected lcore 83 as core 30 on socket 0 00:04:36.741 EAL: Detected lcore 84 as core 0 on socket 1 00:04:36.741 EAL: Detected lcore 85 as core 1 on socket 1 00:04:36.741 EAL: Detected lcore 86 as core 2 on socket 1 00:04:36.741 EAL: Detected lcore 87 as core 3 on socket 1 00:04:36.741 EAL: Detected lcore 88 as core 4 on socket 1 00:04:36.741 EAL: Detected lcore 89 as core 5 on socket 1 00:04:36.741 EAL: Detected lcore 90 as core 6 on socket 1 00:04:36.741 EAL: Detected lcore 91 as core 8 on socket 1 00:04:36.741 EAL: Detected lcore 92 as core 9 on socket 1 00:04:36.741 EAL: Detected lcore 93 as core 10 on socket 1 00:04:36.741 EAL: Detected lcore 94 as core 11 on socket 1 00:04:36.741 EAL: Detected lcore 95 as core 12 on socket 1 00:04:36.741 EAL: Detected lcore 96 as core 13 on socket 1 00:04:36.741 EAL: Detected lcore 97 as core 14 on socket 1 00:04:36.741 EAL: Detected lcore 98 as core 16 on socket 1 00:04:36.741 EAL: Detected lcore 99 as core 17 on socket 1 00:04:36.741 EAL: Detected lcore 100 as core 18 on socket 1 00:04:36.741 EAL: Detected lcore 101 as core 19 on socket 1 00:04:36.741 EAL: Detected lcore 102 as core 20 on socket 1 00:04:36.741 EAL: Detected lcore 103 as core 21 on socket 1 00:04:36.741 EAL: Detected lcore 104 as core 22 on socket 1 00:04:36.741 EAL: Detected lcore 105 as core 24 on socket 1 00:04:36.741 EAL: Detected lcore 106 as core 25 on socket 1 00:04:36.741 EAL: Detected lcore 107 as core 26 on socket 1 00:04:36.741 EAL: Detected lcore 108 as core 27 on socket 1 00:04:36.741 EAL: Detected lcore 109 as core 28 on socket 1 00:04:36.741 EAL: Detected lcore 110 as core 29 on socket 1 00:04:36.741 EAL: Detected lcore 111 as core 30 on socket 1 00:04:36.741 EAL: Maximum logical cores by configuration: 128 00:04:36.741 EAL: Detected CPU lcores: 112 00:04:36.741 EAL: Detected NUMA nodes: 2 00:04:36.741 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:36.741 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:36.741 EAL: Checking presence of .so 'librte_eal.so' 00:04:36.741 EAL: Detected static linkage of DPDK 00:04:36.741 EAL: No shared files mode enabled, IPC will be disabled 00:04:36.741 EAL: Bus pci wants IOVA as 'DC' 00:04:36.741 EAL: Buses did not request a specific IOVA mode. 00:04:36.741 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:36.741 EAL: Selected IOVA mode 'VA' 00:04:36.741 EAL: Probing VFIO support... 
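EAL's detection pass above (112 lcores across 2 NUMA nodes, IOVA as VA, VFIO probe) can be cross-checked from the shell. A few illustrative host-side commands, assuming only standard lscpu and sysfs availability rather than anything from the SPDK scripts:

    lscpu | grep -E '^CPU\(s\)|NUMA node\(s\)'      # expect 112 logical CPUs over 2 nodes
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    ls /sys/kernel/iommu_groups | wc -l             # non-zero means the IOMMU is on, so VFIO can bind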
00:04:36.741 EAL: IOMMU type 1 (Type 1) is supported 00:04:36.741 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:36.741 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:36.741 EAL: VFIO support initialized 00:04:36.741 EAL: Ask a virtual area of 0x2e000 bytes 00:04:36.741 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:36.741 EAL: Setting up physically contiguous memory... 00:04:36.741 EAL: Setting maximum number of open files to 524288 00:04:36.741 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:36.741 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:36.741 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:36.741 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.741 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:36.741 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.741 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.741 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:36.741 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:36.741 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.741 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:36.741 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.741 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.741 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:36.741 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:36.741 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.741 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:36.741 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.741 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.741 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:36.741 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:36.741 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.741 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:36.741 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.741 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.741 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:36.741 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:36.741 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:36.741 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.741 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:36.741 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.741 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.741 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:36.741 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:36.741 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.741 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:36.741 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.741 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.741 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:36.741 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:36.741 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.741 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:36.741 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.741 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.741 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:36.741 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:36.741 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.741 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:36.741 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.741 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.741 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:36.741 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:36.741 EAL: Hugepages will be freed exactly as allocated. 00:04:36.741 EAL: No shared files mode enabled, IPC is disabled 00:04:36.741 EAL: No shared files mode enabled, IPC is disabled 00:04:36.741 EAL: TSC frequency is ~2500000 KHz 00:04:36.741 EAL: Main lcore 0 is ready (tid=7fd707152a00;cpuset=[0]) 00:04:36.741 EAL: Trying to obtain current memory policy. 00:04:36.741 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.741 EAL: Restoring previous memory policy: 0 00:04:36.741 EAL: request: mp_malloc_sync 00:04:36.741 EAL: No shared files mode enabled, IPC is disabled 00:04:36.741 EAL: Heap on socket 0 was expanded by 2MB 00:04:36.741 EAL: No shared files mode enabled, IPC is disabled 00:04:36.741 EAL: Mem event callback 'spdk:(nil)' registered 00:04:36.741 00:04:36.741 00:04:36.741 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.741 http://cunit.sourceforge.net/ 00:04:36.741 00:04:36.741 00:04:36.741 Suite: components_suite 00:04:36.741 Test: vtophys_malloc_test ...passed 00:04:36.741 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:36.741 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.741 EAL: Restoring previous memory policy: 4 00:04:36.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.741 EAL: request: mp_malloc_sync 00:04:36.741 EAL: No shared files mode enabled, IPC is disabled 00:04:36.741 EAL: Heap on socket 0 was expanded by 4MB 00:04:36.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.741 EAL: request: mp_malloc_sync 00:04:36.741 EAL: No shared files mode enabled, IPC is disabled 00:04:36.741 EAL: Heap on socket 0 was shrunk by 4MB 00:04:36.741 EAL: Trying to obtain current memory policy. 00:04:36.741 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.741 EAL: Restoring previous memory policy: 4 00:04:36.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.741 EAL: request: mp_malloc_sync 00:04:36.741 EAL: No shared files mode enabled, IPC is disabled 00:04:36.741 EAL: Heap on socket 0 was expanded by 6MB 00:04:36.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.741 EAL: request: mp_malloc_sync 00:04:36.741 EAL: No shared files mode enabled, IPC is disabled 00:04:36.741 EAL: Heap on socket 0 was shrunk by 6MB 00:04:36.741 EAL: Trying to obtain current memory policy. 00:04:36.741 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.741 EAL: Restoring previous memory policy: 4 00:04:36.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.741 EAL: request: mp_malloc_sync 00:04:36.741 EAL: No shared files mode enabled, IPC is disabled 00:04:36.741 EAL: Heap on socket 0 was expanded by 10MB 00:04:36.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.741 EAL: request: mp_malloc_sync 00:04:36.741 EAL: No shared files mode enabled, IPC is disabled 00:04:36.741 EAL: Heap on socket 0 was shrunk by 10MB 00:04:36.741 EAL: Trying to obtain current memory policy. 
00:04:36.741 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.741 EAL: Restoring previous memory policy: 4 00:04:36.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.741 EAL: request: mp_malloc_sync 00:04:36.741 EAL: No shared files mode enabled, IPC is disabled 00:04:36.741 EAL: Heap on socket 0 was expanded by 18MB 00:04:36.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.741 EAL: request: mp_malloc_sync 00:04:36.741 EAL: No shared files mode enabled, IPC is disabled 00:04:36.741 EAL: Heap on socket 0 was shrunk by 18MB 00:04:36.741 EAL: Trying to obtain current memory policy. 00:04:36.741 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.014 EAL: Restoring previous memory policy: 4 00:04:37.014 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.014 EAL: request: mp_malloc_sync 00:04:37.014 EAL: No shared files mode enabled, IPC is disabled 00:04:37.014 EAL: Heap on socket 0 was expanded by 34MB 00:04:37.014 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.014 EAL: request: mp_malloc_sync 00:04:37.014 EAL: No shared files mode enabled, IPC is disabled 00:04:37.014 EAL: Heap on socket 0 was shrunk by 34MB 00:04:37.014 EAL: Trying to obtain current memory policy. 00:04:37.014 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.014 EAL: Restoring previous memory policy: 4 00:04:37.014 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.014 EAL: request: mp_malloc_sync 00:04:37.014 EAL: No shared files mode enabled, IPC is disabled 00:04:37.014 EAL: Heap on socket 0 was expanded by 66MB 00:04:37.014 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.014 EAL: request: mp_malloc_sync 00:04:37.014 EAL: No shared files mode enabled, IPC is disabled 00:04:37.014 EAL: Heap on socket 0 was shrunk by 66MB 00:04:37.014 EAL: Trying to obtain current memory policy. 00:04:37.014 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.015 EAL: Restoring previous memory policy: 4 00:04:37.015 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.015 EAL: request: mp_malloc_sync 00:04:37.015 EAL: No shared files mode enabled, IPC is disabled 00:04:37.015 EAL: Heap on socket 0 was expanded by 130MB 00:04:37.015 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.015 EAL: request: mp_malloc_sync 00:04:37.015 EAL: No shared files mode enabled, IPC is disabled 00:04:37.015 EAL: Heap on socket 0 was shrunk by 130MB 00:04:37.015 EAL: Trying to obtain current memory policy. 00:04:37.015 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.015 EAL: Restoring previous memory policy: 4 00:04:37.015 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.015 EAL: request: mp_malloc_sync 00:04:37.015 EAL: No shared files mode enabled, IPC is disabled 00:04:37.015 EAL: Heap on socket 0 was expanded by 258MB 00:04:37.015 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.015 EAL: request: mp_malloc_sync 00:04:37.015 EAL: No shared files mode enabled, IPC is disabled 00:04:37.015 EAL: Heap on socket 0 was shrunk by 258MB 00:04:37.015 EAL: Trying to obtain current memory policy. 
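Each "expanded by N MB" / "shrunk by N MB" pair above comes from the vtophys test allocating and then freeing a buffer of roughly doubling size, so the mem event callback fires once in each direction. The sub-test can also be rerun on its own; a hedged sketch, assuming sudo access and that hugepages still need to be reserved (HUGEMEM usage is an assumption):

    # Paths match the run_test invocation earlier in this log.
    sudo HUGEMEM=2048 "$rootdir/scripts/setup.sh"      # reserve 2MB hugepages on both nodes
    sudo "$rootdir/test/env/vtophys/vtophys"           # prints the same expand/shrink sequence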
00:04:37.015 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.274 EAL: Restoring previous memory policy: 4 00:04:37.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.274 EAL: request: mp_malloc_sync 00:04:37.274 EAL: No shared files mode enabled, IPC is disabled 00:04:37.274 EAL: Heap on socket 0 was expanded by 514MB 00:04:37.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.274 EAL: request: mp_malloc_sync 00:04:37.274 EAL: No shared files mode enabled, IPC is disabled 00:04:37.274 EAL: Heap on socket 0 was shrunk by 514MB 00:04:37.274 EAL: Trying to obtain current memory policy. 00:04:37.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.532 EAL: Restoring previous memory policy: 4 00:04:37.532 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.532 EAL: request: mp_malloc_sync 00:04:37.532 EAL: No shared files mode enabled, IPC is disabled 00:04:37.532 EAL: Heap on socket 0 was expanded by 1026MB 00:04:37.791 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.791 EAL: request: mp_malloc_sync 00:04:37.791 EAL: No shared files mode enabled, IPC is disabled 00:04:37.791 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:37.791 passed 00:04:37.791 00:04:37.791 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.791 suites 1 1 n/a 0 0 00:04:37.791 tests 2 2 2 0 0 00:04:37.791 asserts 497 497 497 0 n/a 00:04:37.791 00:04:37.791 Elapsed time = 0.963 seconds 00:04:37.791 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.791 EAL: request: mp_malloc_sync 00:04:37.791 EAL: No shared files mode enabled, IPC is disabled 00:04:37.791 EAL: Heap on socket 0 was shrunk by 2MB 00:04:37.791 EAL: No shared files mode enabled, IPC is disabled 00:04:37.791 EAL: No shared files mode enabled, IPC is disabled 00:04:37.791 EAL: No shared files mode enabled, IPC is disabled 00:04:37.791 00:04:37.791 real 0m1.086s 00:04:37.791 user 0m0.637s 00:04:37.791 sys 0m0.425s 00:04:37.791 19:31:13 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.791 19:31:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:37.791 ************************************ 00:04:37.791 END TEST env_vtophys 00:04:37.791 ************************************ 00:04:37.791 19:31:13 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:37.791 19:31:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.791 19:31:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.791 19:31:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.050 ************************************ 00:04:38.050 START TEST env_pci 00:04:38.050 ************************************ 00:04:38.050 19:31:13 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:38.050 00:04:38.050 00:04:38.050 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.050 http://cunit.sourceforge.net/ 00:04:38.050 00:04:38.050 00:04:38.050 Suite: pci 00:04:38.050 Test: pci_hook ...[2024-11-26 19:31:13.126315] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1103986 has claimed it 00:04:38.050 EAL: Cannot find device (10000:00:01.0) 00:04:38.050 EAL: Failed to attach device on primary process 00:04:38.050 passed 00:04:38.050 00:04:38.050 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:38.050 suites 1 1 n/a 0 0 00:04:38.050 tests 1 1 1 0 0 00:04:38.050 asserts 25 25 25 0 n/a 00:04:38.050 00:04:38.050 Elapsed time = 0.035 seconds 00:04:38.050 00:04:38.050 real 0m0.055s 00:04:38.050 user 0m0.014s 00:04:38.050 sys 0m0.041s 00:04:38.050 19:31:13 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.050 19:31:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:38.050 ************************************ 00:04:38.050 END TEST env_pci 00:04:38.050 ************************************ 00:04:38.050 19:31:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:38.050 19:31:13 env -- env/env.sh@15 -- # uname 00:04:38.050 19:31:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:38.050 19:31:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:38.050 19:31:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.050 19:31:13 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:38.050 19:31:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.050 19:31:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.050 ************************************ 00:04:38.050 START TEST env_dpdk_post_init 00:04:38.050 ************************************ 00:04:38.050 19:31:13 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.050 EAL: Detected CPU lcores: 112 00:04:38.050 EAL: Detected NUMA nodes: 2 00:04:38.050 EAL: Detected static linkage of DPDK 00:04:38.050 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.050 EAL: Selected IOVA mode 'VA' 00:04:38.050 EAL: VFIO support initialized 00:04:38.050 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.308 EAL: Using IOMMU type 1 (Type 1) 00:04:38.874 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:43.063 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:43.063 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:04:43.063 Starting DPDK initialization... 00:04:43.063 Starting SPDK post initialization... 00:04:43.063 SPDK NVMe probe 00:04:43.063 Attaching to 0000:d8:00.0 00:04:43.063 Attached to 0000:d8:00.0 00:04:43.063 Cleaning up... 
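The post-init test above probes the NVMe controller at 0000:d8:00.0 through vfio-pci and then detaches. Whether SPDK currently owns that BDF can be verified directly; an illustrative check, not part of the test scripts themselves:

    readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver   # ends in vfio-pci while SPDK holds it, nvme otherwise
    sudo "$rootdir/scripts/setup.sh" status                 # same Hugepages/BDF table printed earlier in this log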
00:04:43.063 00:04:43.063 real 0m4.666s 00:04:43.063 user 0m3.317s 00:04:43.063 sys 0m0.593s 00:04:43.063 19:31:17 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.063 19:31:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.063 ************************************ 00:04:43.063 END TEST env_dpdk_post_init 00:04:43.063 ************************************ 00:04:43.063 19:31:17 env -- env/env.sh@26 -- # uname 00:04:43.063 19:31:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:43.063 19:31:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:43.063 19:31:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.063 19:31:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.063 19:31:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.063 ************************************ 00:04:43.063 START TEST env_mem_callbacks 00:04:43.063 ************************************ 00:04:43.063 19:31:18 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:43.063 EAL: Detected CPU lcores: 112 00:04:43.063 EAL: Detected NUMA nodes: 2 00:04:43.063 EAL: Detected static linkage of DPDK 00:04:43.063 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.063 EAL: Selected IOVA mode 'VA' 00:04:43.063 EAL: VFIO support initialized 00:04:43.063 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.063 00:04:43.063 00:04:43.063 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.063 http://cunit.sourceforge.net/ 00:04:43.063 00:04:43.063 00:04:43.063 Suite: memory 00:04:43.063 Test: test ... 
00:04:43.063 register 0x200000200000 2097152 00:04:43.063 malloc 3145728 00:04:43.063 register 0x200000400000 4194304 00:04:43.063 buf 0x200000500000 len 3145728 PASSED 00:04:43.063 malloc 64 00:04:43.063 buf 0x2000004fff40 len 64 PASSED 00:04:43.063 malloc 4194304 00:04:43.063 register 0x200000800000 6291456 00:04:43.063 buf 0x200000a00000 len 4194304 PASSED 00:04:43.063 free 0x200000500000 3145728 00:04:43.063 free 0x2000004fff40 64 00:04:43.063 unregister 0x200000400000 4194304 PASSED 00:04:43.063 free 0x200000a00000 4194304 00:04:43.063 unregister 0x200000800000 6291456 PASSED 00:04:43.063 malloc 8388608 00:04:43.063 register 0x200000400000 10485760 00:04:43.063 buf 0x200000600000 len 8388608 PASSED 00:04:43.063 free 0x200000600000 8388608 00:04:43.063 unregister 0x200000400000 10485760 PASSED 00:04:43.063 passed 00:04:43.063 00:04:43.063 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.063 suites 1 1 n/a 0 0 00:04:43.063 tests 1 1 1 0 0 00:04:43.063 asserts 15 15 15 0 n/a 00:04:43.063 00:04:43.063 Elapsed time = 0.005 seconds 00:04:43.063 00:04:43.063 real 0m0.064s 00:04:43.063 user 0m0.013s 00:04:43.063 sys 0m0.051s 00:04:43.063 19:31:18 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.063 19:31:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:43.063 ************************************ 00:04:43.063 END TEST env_mem_callbacks 00:04:43.063 ************************************ 00:04:43.063 00:04:43.063 real 0m6.591s 00:04:43.063 user 0m4.323s 00:04:43.063 sys 0m1.535s 00:04:43.063 19:31:18 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.063 19:31:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.063 ************************************ 00:04:43.063 END TEST env 00:04:43.063 ************************************ 00:04:43.063 19:31:18 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:43.063 19:31:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.063 19:31:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.063 19:31:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.063 ************************************ 00:04:43.063 START TEST rpc 00:04:43.063 ************************************ 00:04:43.063 19:31:18 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:43.063 * Looking for test storage... 
00:04:43.063 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:43.063 19:31:18 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:43.063 19:31:18 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:43.063 19:31:18 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:43.063 19:31:18 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:43.063 19:31:18 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.063 19:31:18 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.063 19:31:18 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.063 19:31:18 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.063 19:31:18 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.063 19:31:18 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.063 19:31:18 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.063 19:31:18 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.063 19:31:18 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.063 19:31:18 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.063 19:31:18 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.063 19:31:18 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:43.063 19:31:18 rpc -- scripts/common.sh@345 -- # : 1 00:04:43.063 19:31:18 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.063 19:31:18 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.322 19:31:18 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:43.322 19:31:18 rpc -- scripts/common.sh@353 -- # local d=1 00:04:43.322 19:31:18 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.322 19:31:18 rpc -- scripts/common.sh@355 -- # echo 1 00:04:43.322 19:31:18 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.322 19:31:18 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:43.322 19:31:18 rpc -- scripts/common.sh@353 -- # local d=2 00:04:43.322 19:31:18 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.322 19:31:18 rpc -- scripts/common.sh@355 -- # echo 2 00:04:43.322 19:31:18 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.322 19:31:18 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.322 19:31:18 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.322 19:31:18 rpc -- scripts/common.sh@368 -- # return 0 00:04:43.322 19:31:18 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.322 19:31:18 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:43.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.322 --rc genhtml_branch_coverage=1 00:04:43.322 --rc genhtml_function_coverage=1 00:04:43.322 --rc genhtml_legend=1 00:04:43.322 --rc geninfo_all_blocks=1 00:04:43.322 --rc geninfo_unexecuted_blocks=1 00:04:43.322 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:43.322 ' 00:04:43.322 19:31:18 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:43.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.322 --rc genhtml_branch_coverage=1 00:04:43.322 --rc genhtml_function_coverage=1 00:04:43.322 --rc genhtml_legend=1 00:04:43.322 --rc geninfo_all_blocks=1 00:04:43.322 --rc geninfo_unexecuted_blocks=1 00:04:43.322 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:43.322 ' 00:04:43.322 19:31:18 rpc -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:04:43.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.322 --rc genhtml_branch_coverage=1 00:04:43.322 --rc genhtml_function_coverage=1 00:04:43.322 --rc genhtml_legend=1 00:04:43.322 --rc geninfo_all_blocks=1 00:04:43.322 --rc geninfo_unexecuted_blocks=1 00:04:43.322 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:43.322 ' 00:04:43.322 19:31:18 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.322 --rc genhtml_branch_coverage=1 00:04:43.322 --rc genhtml_function_coverage=1 00:04:43.322 --rc genhtml_legend=1 00:04:43.322 --rc geninfo_all_blocks=1 00:04:43.322 --rc geninfo_unexecuted_blocks=1 00:04:43.322 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:43.322 ' 00:04:43.322 19:31:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1105158 00:04:43.322 19:31:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.322 19:31:18 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:43.322 19:31:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1105158 00:04:43.322 19:31:18 rpc -- common/autotest_common.sh@835 -- # '[' -z 1105158 ']' 00:04:43.322 19:31:18 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.322 19:31:18 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.322 19:31:18 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.322 19:31:18 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.322 19:31:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.322 [2024-11-26 19:31:18.409928] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:04:43.322 [2024-11-26 19:31:18.409990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1105158 ] 00:04:43.322 [2024-11-26 19:31:18.479511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.322 [2024-11-26 19:31:18.521172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:43.322 [2024-11-26 19:31:18.521209] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1105158' to capture a snapshot of events at runtime. 00:04:43.322 [2024-11-26 19:31:18.521218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:43.322 [2024-11-26 19:31:18.521227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:43.322 [2024-11-26 19:31:18.521234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1105158 for offline analysis/debug. 
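With the target above listening on the default /var/tmp/spdk.sock, the rpc_cmd wrapper used by these tests maps onto plain rpc.py calls. A minimal sketch of the two calls the integrity test issues next, assuming the standard -s socket flag:

    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock bdev_malloc_create 8 512
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock bdev_get_bdevs | jq '.[0].block_size'   # 512, matching the JSON below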
00:04:43.322 [2024-11-26 19:31:18.521810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.581 19:31:18 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.581 19:31:18 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:43.581 19:31:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:43.581 19:31:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:43.581 19:31:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:43.581 19:31:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:43.581 19:31:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.581 19:31:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.581 19:31:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.581 ************************************ 00:04:43.581 START TEST rpc_integrity 00:04:43.581 ************************************ 00:04:43.581 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:43.581 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.581 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.581 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.581 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.581 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.581 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.581 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.581 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.581 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.581 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.581 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.581 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.581 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.581 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.581 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.581 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.581 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.581 { 00:04:43.581 "name": "Malloc0", 00:04:43.581 "aliases": [ 00:04:43.581 "4b325ad2-08f9-4ede-8a11-cb9e2d667fe7" 00:04:43.581 ], 00:04:43.581 "product_name": "Malloc disk", 00:04:43.581 "block_size": 512, 00:04:43.581 "num_blocks": 16384, 00:04:43.581 "uuid": "4b325ad2-08f9-4ede-8a11-cb9e2d667fe7", 00:04:43.581 "assigned_rate_limits": { 00:04:43.581 "rw_ios_per_sec": 0, 00:04:43.581 "rw_mbytes_per_sec": 0, 00:04:43.581 "r_mbytes_per_sec": 0, 00:04:43.581 "w_mbytes_per_sec": 
0 00:04:43.581 }, 00:04:43.581 "claimed": false, 00:04:43.581 "zoned": false, 00:04:43.581 "supported_io_types": { 00:04:43.581 "read": true, 00:04:43.581 "write": true, 00:04:43.581 "unmap": true, 00:04:43.581 "flush": true, 00:04:43.581 "reset": true, 00:04:43.581 "nvme_admin": false, 00:04:43.581 "nvme_io": false, 00:04:43.581 "nvme_io_md": false, 00:04:43.581 "write_zeroes": true, 00:04:43.581 "zcopy": true, 00:04:43.581 "get_zone_info": false, 00:04:43.581 "zone_management": false, 00:04:43.581 "zone_append": false, 00:04:43.581 "compare": false, 00:04:43.581 "compare_and_write": false, 00:04:43.581 "abort": true, 00:04:43.581 "seek_hole": false, 00:04:43.581 "seek_data": false, 00:04:43.581 "copy": true, 00:04:43.581 "nvme_iov_md": false 00:04:43.581 }, 00:04:43.581 "memory_domains": [ 00:04:43.581 { 00:04:43.581 "dma_device_id": "system", 00:04:43.581 "dma_device_type": 1 00:04:43.581 }, 00:04:43.581 { 00:04:43.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.581 "dma_device_type": 2 00:04:43.581 } 00:04:43.581 ], 00:04:43.582 "driver_specific": {} 00:04:43.582 } 00:04:43.582 ]' 00:04:43.582 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.840 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.840 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:43.840 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.840 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.840 [2024-11-26 19:31:18.896868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:43.840 [2024-11-26 19:31:18.896901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.840 [2024-11-26 19:31:18.896920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x512e280 00:04:43.840 [2024-11-26 19:31:18.896930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.840 [2024-11-26 19:31:18.897835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.840 [2024-11-26 19:31:18.897857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.840 Passthru0 00:04:43.840 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.840 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.840 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.840 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.840 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.840 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.841 { 00:04:43.841 "name": "Malloc0", 00:04:43.841 "aliases": [ 00:04:43.841 "4b325ad2-08f9-4ede-8a11-cb9e2d667fe7" 00:04:43.841 ], 00:04:43.841 "product_name": "Malloc disk", 00:04:43.841 "block_size": 512, 00:04:43.841 "num_blocks": 16384, 00:04:43.841 "uuid": "4b325ad2-08f9-4ede-8a11-cb9e2d667fe7", 00:04:43.841 "assigned_rate_limits": { 00:04:43.841 "rw_ios_per_sec": 0, 00:04:43.841 "rw_mbytes_per_sec": 0, 00:04:43.841 "r_mbytes_per_sec": 0, 00:04:43.841 "w_mbytes_per_sec": 0 00:04:43.841 }, 00:04:43.841 "claimed": true, 00:04:43.841 "claim_type": "exclusive_write", 00:04:43.841 "zoned": false, 00:04:43.841 "supported_io_types": { 00:04:43.841 "read": true, 00:04:43.841 "write": true, 00:04:43.841 "unmap": true, 
00:04:43.841 "flush": true, 00:04:43.841 "reset": true, 00:04:43.841 "nvme_admin": false, 00:04:43.841 "nvme_io": false, 00:04:43.841 "nvme_io_md": false, 00:04:43.841 "write_zeroes": true, 00:04:43.841 "zcopy": true, 00:04:43.841 "get_zone_info": false, 00:04:43.841 "zone_management": false, 00:04:43.841 "zone_append": false, 00:04:43.841 "compare": false, 00:04:43.841 "compare_and_write": false, 00:04:43.841 "abort": true, 00:04:43.841 "seek_hole": false, 00:04:43.841 "seek_data": false, 00:04:43.841 "copy": true, 00:04:43.841 "nvme_iov_md": false 00:04:43.841 }, 00:04:43.841 "memory_domains": [ 00:04:43.841 { 00:04:43.841 "dma_device_id": "system", 00:04:43.841 "dma_device_type": 1 00:04:43.841 }, 00:04:43.841 { 00:04:43.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.841 "dma_device_type": 2 00:04:43.841 } 00:04:43.841 ], 00:04:43.841 "driver_specific": {} 00:04:43.841 }, 00:04:43.841 { 00:04:43.841 "name": "Passthru0", 00:04:43.841 "aliases": [ 00:04:43.841 "edb771a7-8504-556a-a088-9f87220ddd7a" 00:04:43.841 ], 00:04:43.841 "product_name": "passthru", 00:04:43.841 "block_size": 512, 00:04:43.841 "num_blocks": 16384, 00:04:43.841 "uuid": "edb771a7-8504-556a-a088-9f87220ddd7a", 00:04:43.841 "assigned_rate_limits": { 00:04:43.841 "rw_ios_per_sec": 0, 00:04:43.841 "rw_mbytes_per_sec": 0, 00:04:43.841 "r_mbytes_per_sec": 0, 00:04:43.841 "w_mbytes_per_sec": 0 00:04:43.841 }, 00:04:43.841 "claimed": false, 00:04:43.841 "zoned": false, 00:04:43.841 "supported_io_types": { 00:04:43.841 "read": true, 00:04:43.841 "write": true, 00:04:43.841 "unmap": true, 00:04:43.841 "flush": true, 00:04:43.841 "reset": true, 00:04:43.841 "nvme_admin": false, 00:04:43.841 "nvme_io": false, 00:04:43.841 "nvme_io_md": false, 00:04:43.841 "write_zeroes": true, 00:04:43.841 "zcopy": true, 00:04:43.841 "get_zone_info": false, 00:04:43.841 "zone_management": false, 00:04:43.841 "zone_append": false, 00:04:43.841 "compare": false, 00:04:43.841 "compare_and_write": false, 00:04:43.841 "abort": true, 00:04:43.841 "seek_hole": false, 00:04:43.841 "seek_data": false, 00:04:43.841 "copy": true, 00:04:43.841 "nvme_iov_md": false 00:04:43.841 }, 00:04:43.841 "memory_domains": [ 00:04:43.841 { 00:04:43.841 "dma_device_id": "system", 00:04:43.841 "dma_device_type": 1 00:04:43.841 }, 00:04:43.841 { 00:04:43.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.841 "dma_device_type": 2 00:04:43.841 } 00:04:43.841 ], 00:04:43.841 "driver_specific": { 00:04:43.841 "passthru": { 00:04:43.841 "name": "Passthru0", 00:04:43.841 "base_bdev_name": "Malloc0" 00:04:43.841 } 00:04:43.841 } 00:04:43.841 } 00:04:43.841 ]' 00:04:43.841 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.841 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.841 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.841 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.841 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.841 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.841 19:31:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:43.841 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.841 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.841 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.841 19:31:18 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.841 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.841 19:31:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.841 19:31:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.841 19:31:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.841 19:31:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.841 19:31:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.841 00:04:43.841 real 0m0.284s 00:04:43.841 user 0m0.175s 00:04:43.841 sys 0m0.042s 00:04:43.841 19:31:19 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.841 19:31:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.841 ************************************ 00:04:43.841 END TEST rpc_integrity 00:04:43.841 ************************************ 00:04:43.841 19:31:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:43.841 19:31:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.841 19:31:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.841 19:31:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.841 ************************************ 00:04:43.841 START TEST rpc_plugins 00:04:43.841 ************************************ 00:04:43.841 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:43.841 19:31:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:43.841 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.841 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.841 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.841 19:31:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:44.109 19:31:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:44.109 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.109 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.109 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.109 19:31:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:44.109 { 00:04:44.109 "name": "Malloc1", 00:04:44.109 "aliases": [ 00:04:44.109 "ddf10ccd-4bbe-4e98-b3d9-3f149bf836b8" 00:04:44.110 ], 00:04:44.110 "product_name": "Malloc disk", 00:04:44.110 "block_size": 4096, 00:04:44.110 "num_blocks": 256, 00:04:44.110 "uuid": "ddf10ccd-4bbe-4e98-b3d9-3f149bf836b8", 00:04:44.110 "assigned_rate_limits": { 00:04:44.110 "rw_ios_per_sec": 0, 00:04:44.110 "rw_mbytes_per_sec": 0, 00:04:44.110 "r_mbytes_per_sec": 0, 00:04:44.110 "w_mbytes_per_sec": 0 00:04:44.110 }, 00:04:44.110 "claimed": false, 00:04:44.110 "zoned": false, 00:04:44.110 "supported_io_types": { 00:04:44.110 "read": true, 00:04:44.110 "write": true, 00:04:44.110 "unmap": true, 00:04:44.110 "flush": true, 00:04:44.110 "reset": true, 00:04:44.110 "nvme_admin": false, 00:04:44.110 "nvme_io": false, 00:04:44.110 "nvme_io_md": false, 00:04:44.110 "write_zeroes": true, 00:04:44.110 "zcopy": true, 00:04:44.110 "get_zone_info": false, 00:04:44.110 "zone_management": false, 00:04:44.110 "zone_append": false, 00:04:44.110 "compare": false, 00:04:44.110 "compare_and_write": false, 00:04:44.110 "abort": true, 00:04:44.110 "seek_hole": false, 00:04:44.110 "seek_data": false, 00:04:44.110 "copy": true, 00:04:44.110 
"nvme_iov_md": false 00:04:44.110 }, 00:04:44.110 "memory_domains": [ 00:04:44.110 { 00:04:44.110 "dma_device_id": "system", 00:04:44.110 "dma_device_type": 1 00:04:44.110 }, 00:04:44.110 { 00:04:44.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.110 "dma_device_type": 2 00:04:44.110 } 00:04:44.110 ], 00:04:44.110 "driver_specific": {} 00:04:44.110 } 00:04:44.110 ]' 00:04:44.110 19:31:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:44.110 19:31:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:44.110 19:31:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:44.110 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.110 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.110 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.110 19:31:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:44.110 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.110 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.110 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.110 19:31:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:44.110 19:31:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:44.110 19:31:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:44.110 00:04:44.110 real 0m0.149s 00:04:44.110 user 0m0.088s 00:04:44.110 sys 0m0.026s 00:04:44.110 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.110 19:31:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.110 ************************************ 00:04:44.110 END TEST rpc_plugins 00:04:44.110 ************************************ 00:04:44.110 19:31:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:44.110 19:31:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.110 19:31:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.110 19:31:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.110 ************************************ 00:04:44.110 START TEST rpc_trace_cmd_test 00:04:44.110 ************************************ 00:04:44.110 19:31:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:44.110 19:31:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:44.110 19:31:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:44.110 19:31:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.110 19:31:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.110 19:31:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.110 19:31:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:44.110 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1105158", 00:04:44.110 "tpoint_group_mask": "0x8", 00:04:44.110 "iscsi_conn": { 00:04:44.110 "mask": "0x2", 00:04:44.110 "tpoint_mask": "0x0" 00:04:44.110 }, 00:04:44.111 "scsi": { 00:04:44.111 "mask": "0x4", 00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 }, 00:04:44.111 "bdev": { 00:04:44.111 "mask": "0x8", 00:04:44.111 "tpoint_mask": "0xffffffffffffffff" 00:04:44.111 }, 00:04:44.111 "nvmf_rdma": { 00:04:44.111 "mask": "0x10", 00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 }, 00:04:44.111 "nvmf_tcp": { 00:04:44.111 "mask": "0x20", 
00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 }, 00:04:44.111 "ftl": { 00:04:44.111 "mask": "0x40", 00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 }, 00:04:44.111 "blobfs": { 00:04:44.111 "mask": "0x80", 00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 }, 00:04:44.111 "dsa": { 00:04:44.111 "mask": "0x200", 00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 }, 00:04:44.111 "thread": { 00:04:44.111 "mask": "0x400", 00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 }, 00:04:44.111 "nvme_pcie": { 00:04:44.111 "mask": "0x800", 00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 }, 00:04:44.111 "iaa": { 00:04:44.111 "mask": "0x1000", 00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 }, 00:04:44.111 "nvme_tcp": { 00:04:44.111 "mask": "0x2000", 00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 }, 00:04:44.111 "bdev_nvme": { 00:04:44.111 "mask": "0x4000", 00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 }, 00:04:44.111 "sock": { 00:04:44.111 "mask": "0x8000", 00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 }, 00:04:44.111 "blob": { 00:04:44.111 "mask": "0x10000", 00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 }, 00:04:44.111 "bdev_raid": { 00:04:44.111 "mask": "0x20000", 00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 }, 00:04:44.111 "scheduler": { 00:04:44.111 "mask": "0x40000", 00:04:44.111 "tpoint_mask": "0x0" 00:04:44.111 } 00:04:44.111 }' 00:04:44.111 19:31:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:44.371 19:31:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:44.371 19:31:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:44.371 19:31:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:44.372 19:31:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:44.372 19:31:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:44.372 19:31:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:44.372 19:31:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:44.372 19:31:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:44.372 19:31:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:44.372 00:04:44.372 real 0m0.236s 00:04:44.372 user 0m0.191s 00:04:44.372 sys 0m0.036s 00:04:44.372 19:31:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.372 19:31:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.372 ************************************ 00:04:44.372 END TEST rpc_trace_cmd_test 00:04:44.372 ************************************ 00:04:44.372 19:31:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:44.372 19:31:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:44.372 19:31:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:44.372 19:31:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.372 19:31:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.372 19:31:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.372 ************************************ 00:04:44.372 START TEST rpc_daemon_integrity 00:04:44.372 ************************************ 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.631 19:31:19 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.631 { 00:04:44.631 "name": "Malloc2", 00:04:44.631 "aliases": [ 00:04:44.631 "a6a7bf10-5283-4e15-88fc-8014161ba990" 00:04:44.631 ], 00:04:44.631 "product_name": "Malloc disk", 00:04:44.631 "block_size": 512, 00:04:44.631 "num_blocks": 16384, 00:04:44.631 "uuid": "a6a7bf10-5283-4e15-88fc-8014161ba990", 00:04:44.631 "assigned_rate_limits": { 00:04:44.631 "rw_ios_per_sec": 0, 00:04:44.631 "rw_mbytes_per_sec": 0, 00:04:44.631 "r_mbytes_per_sec": 0, 00:04:44.631 "w_mbytes_per_sec": 0 00:04:44.631 }, 00:04:44.631 "claimed": false, 00:04:44.631 "zoned": false, 00:04:44.631 "supported_io_types": { 00:04:44.631 "read": true, 00:04:44.631 "write": true, 00:04:44.631 "unmap": true, 00:04:44.631 "flush": true, 00:04:44.631 "reset": true, 00:04:44.631 "nvme_admin": false, 00:04:44.631 "nvme_io": false, 00:04:44.631 "nvme_io_md": false, 00:04:44.631 "write_zeroes": true, 00:04:44.631 "zcopy": true, 00:04:44.631 "get_zone_info": false, 00:04:44.631 "zone_management": false, 00:04:44.631 "zone_append": false, 00:04:44.631 "compare": false, 00:04:44.631 "compare_and_write": false, 00:04:44.631 "abort": true, 00:04:44.631 "seek_hole": false, 00:04:44.631 "seek_data": false, 00:04:44.631 "copy": true, 00:04:44.631 "nvme_iov_md": false 00:04:44.631 }, 00:04:44.631 "memory_domains": [ 00:04:44.631 { 00:04:44.631 "dma_device_id": "system", 00:04:44.631 "dma_device_type": 1 00:04:44.631 }, 00:04:44.631 { 00:04:44.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.631 "dma_device_type": 2 00:04:44.631 } 00:04:44.631 ], 00:04:44.631 "driver_specific": {} 00:04:44.631 } 00:04:44.631 ]' 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.631 [2024-11-26 19:31:19.823266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:44.631 
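Both rpc_integrity earlier and rpc_daemon_integrity here drive the same malloc/passthru life cycle through the rpc_cmd wrapper. Run by hand against the same target, the sequence looks roughly like the following; the default RPC socket /var/tmp/spdk.sock and paths relative to the spdk checkout are assumed, and the bdev name printed by bdev_malloc_create (Malloc0, Malloc2, ...) is whatever the target hands back:

    ./scripts/rpc.py bdev_malloc_create 8 512            # 8 MiB backing bdev, 512-byte blocks; prints its name
    ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length          # 2: the malloc plus the passthru claiming it
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc2
    ./scripts/rpc.py bdev_get_bdevs | jq length          # back to 0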
[2024-11-26 19:31:19.823299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.631 [2024-11-26 19:31:19.823322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51238b0 00:04:44.631 [2024-11-26 19:31:19.823332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.631 [2024-11-26 19:31:19.824078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.631 [2024-11-26 19:31:19.824100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.631 Passthru0 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.631 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.631 { 00:04:44.631 "name": "Malloc2", 00:04:44.631 "aliases": [ 00:04:44.631 "a6a7bf10-5283-4e15-88fc-8014161ba990" 00:04:44.631 ], 00:04:44.631 "product_name": "Malloc disk", 00:04:44.631 "block_size": 512, 00:04:44.631 "num_blocks": 16384, 00:04:44.631 "uuid": "a6a7bf10-5283-4e15-88fc-8014161ba990", 00:04:44.631 "assigned_rate_limits": { 00:04:44.631 "rw_ios_per_sec": 0, 00:04:44.631 "rw_mbytes_per_sec": 0, 00:04:44.631 "r_mbytes_per_sec": 0, 00:04:44.631 "w_mbytes_per_sec": 0 00:04:44.631 }, 00:04:44.631 "claimed": true, 00:04:44.631 "claim_type": "exclusive_write", 00:04:44.631 "zoned": false, 00:04:44.631 "supported_io_types": { 00:04:44.631 "read": true, 00:04:44.631 "write": true, 00:04:44.631 "unmap": true, 00:04:44.631 "flush": true, 00:04:44.631 "reset": true, 00:04:44.631 "nvme_admin": false, 00:04:44.631 "nvme_io": false, 00:04:44.631 "nvme_io_md": false, 00:04:44.631 "write_zeroes": true, 00:04:44.631 "zcopy": true, 00:04:44.631 "get_zone_info": false, 00:04:44.631 "zone_management": false, 00:04:44.632 "zone_append": false, 00:04:44.632 "compare": false, 00:04:44.632 "compare_and_write": false, 00:04:44.632 "abort": true, 00:04:44.632 "seek_hole": false, 00:04:44.632 "seek_data": false, 00:04:44.632 "copy": true, 00:04:44.632 "nvme_iov_md": false 00:04:44.632 }, 00:04:44.632 "memory_domains": [ 00:04:44.632 { 00:04:44.632 "dma_device_id": "system", 00:04:44.632 "dma_device_type": 1 00:04:44.632 }, 00:04:44.632 { 00:04:44.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.632 "dma_device_type": 2 00:04:44.632 } 00:04:44.632 ], 00:04:44.632 "driver_specific": {} 00:04:44.632 }, 00:04:44.632 { 00:04:44.632 "name": "Passthru0", 00:04:44.632 "aliases": [ 00:04:44.632 "0a8f95df-4d43-5370-874f-6fafae2a6183" 00:04:44.632 ], 00:04:44.632 "product_name": "passthru", 00:04:44.632 "block_size": 512, 00:04:44.632 "num_blocks": 16384, 00:04:44.632 "uuid": "0a8f95df-4d43-5370-874f-6fafae2a6183", 00:04:44.632 "assigned_rate_limits": { 00:04:44.632 "rw_ios_per_sec": 0, 00:04:44.632 "rw_mbytes_per_sec": 0, 00:04:44.632 "r_mbytes_per_sec": 0, 00:04:44.632 "w_mbytes_per_sec": 0 00:04:44.632 }, 00:04:44.632 "claimed": false, 00:04:44.632 "zoned": false, 00:04:44.632 "supported_io_types": { 00:04:44.632 "read": true, 00:04:44.632 "write": true, 00:04:44.632 "unmap": true, 00:04:44.632 "flush": true, 00:04:44.632 "reset": true, 
00:04:44.632 "nvme_admin": false, 00:04:44.632 "nvme_io": false, 00:04:44.632 "nvme_io_md": false, 00:04:44.632 "write_zeroes": true, 00:04:44.632 "zcopy": true, 00:04:44.632 "get_zone_info": false, 00:04:44.632 "zone_management": false, 00:04:44.632 "zone_append": false, 00:04:44.632 "compare": false, 00:04:44.632 "compare_and_write": false, 00:04:44.632 "abort": true, 00:04:44.632 "seek_hole": false, 00:04:44.632 "seek_data": false, 00:04:44.632 "copy": true, 00:04:44.632 "nvme_iov_md": false 00:04:44.632 }, 00:04:44.632 "memory_domains": [ 00:04:44.632 { 00:04:44.632 "dma_device_id": "system", 00:04:44.632 "dma_device_type": 1 00:04:44.632 }, 00:04:44.632 { 00:04:44.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.632 "dma_device_type": 2 00:04:44.632 } 00:04:44.632 ], 00:04:44.632 "driver_specific": { 00:04:44.632 "passthru": { 00:04:44.632 "name": "Passthru0", 00:04:44.632 "base_bdev_name": "Malloc2" 00:04:44.632 } 00:04:44.632 } 00:04:44.632 } 00:04:44.632 ]' 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.632 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.905 19:31:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.905 00:04:44.905 real 0m0.297s 00:04:44.905 user 0m0.190s 00:04:44.905 sys 0m0.050s 00:04:44.905 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.905 19:31:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.905 ************************************ 00:04:44.905 END TEST rpc_daemon_integrity 00:04:44.905 ************************************ 00:04:44.906 19:31:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.906 19:31:20 rpc -- rpc/rpc.sh@84 -- # killprocess 1105158 00:04:44.906 19:31:20 rpc -- common/autotest_common.sh@954 -- # '[' -z 1105158 ']' 00:04:44.906 19:31:20 rpc -- common/autotest_common.sh@958 -- # kill -0 1105158 00:04:44.906 19:31:20 rpc -- common/autotest_common.sh@959 -- # uname 00:04:44.906 19:31:20 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.906 19:31:20 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1105158 
00:04:44.906 19:31:20 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.906 19:31:20 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.906 19:31:20 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1105158' 00:04:44.906 killing process with pid 1105158 00:04:44.906 19:31:20 rpc -- common/autotest_common.sh@973 -- # kill 1105158 00:04:44.906 19:31:20 rpc -- common/autotest_common.sh@978 -- # wait 1105158 00:04:45.165 00:04:45.165 real 0m2.187s 00:04:45.165 user 0m2.802s 00:04:45.165 sys 0m0.792s 00:04:45.165 19:31:20 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.165 19:31:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.165 ************************************ 00:04:45.165 END TEST rpc 00:04:45.165 ************************************ 00:04:45.165 19:31:20 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:45.165 19:31:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.165 19:31:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.165 19:31:20 -- common/autotest_common.sh@10 -- # set +x 00:04:45.165 ************************************ 00:04:45.165 START TEST skip_rpc 00:04:45.165 ************************************ 00:04:45.165 19:31:20 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:45.425 * Looking for test storage... 00:04:45.425 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:45.425 19:31:20 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.425 19:31:20 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.425 19:31:20 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.425 19:31:20 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.425 19:31:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:45.425 19:31:20 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.425 19:31:20 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.425 --rc genhtml_branch_coverage=1 00:04:45.425 --rc genhtml_function_coverage=1 00:04:45.425 --rc genhtml_legend=1 00:04:45.425 --rc geninfo_all_blocks=1 00:04:45.425 --rc geninfo_unexecuted_blocks=1 00:04:45.425 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:45.425 ' 00:04:45.425 19:31:20 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.425 --rc genhtml_branch_coverage=1 00:04:45.425 --rc genhtml_function_coverage=1 00:04:45.425 --rc genhtml_legend=1 00:04:45.425 --rc geninfo_all_blocks=1 00:04:45.425 --rc geninfo_unexecuted_blocks=1 00:04:45.425 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:45.425 ' 00:04:45.425 19:31:20 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.425 --rc genhtml_branch_coverage=1 00:04:45.425 --rc genhtml_function_coverage=1 00:04:45.425 --rc genhtml_legend=1 00:04:45.425 --rc geninfo_all_blocks=1 00:04:45.425 --rc geninfo_unexecuted_blocks=1 00:04:45.425 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:45.425 ' 00:04:45.425 19:31:20 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.425 --rc genhtml_branch_coverage=1 00:04:45.425 --rc genhtml_function_coverage=1 00:04:45.425 --rc genhtml_legend=1 00:04:45.425 --rc geninfo_all_blocks=1 00:04:45.425 --rc geninfo_unexecuted_blocks=1 00:04:45.425 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:45.425 ' 00:04:45.425 19:31:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:45.425 19:31:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:45.425 19:31:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:45.425 19:31:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.425 19:31:20 
skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.425 19:31:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.425 ************************************ 00:04:45.425 START TEST skip_rpc 00:04:45.425 ************************************ 00:04:45.425 19:31:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:45.425 19:31:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1105618 00:04:45.425 19:31:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.425 19:31:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:45.425 19:31:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:45.425 [2024-11-26 19:31:20.712408] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:04:45.425 [2024-11-26 19:31:20.712465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1105618 ] 00:04:45.684 [2024-11-26 19:31:20.782481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.684 [2024-11-26 19:31:20.821410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.952 19:31:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:50.952 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:50.952 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:50.952 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:50.952 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.952 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:50.952 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.952 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:50.952 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.952 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.952 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:50.952 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:50.952 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:50.953 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:50.953 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:50.953 19:31:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:50.953 19:31:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1105618 00:04:50.953 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1105618 ']' 00:04:50.953 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1105618 00:04:50.953 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:50.953 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.953 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1105618 
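test_skip_rpc above is a negative test: the target is started with --no-rpc-server, so no /var/tmp/spdk.sock ever appears and the spdk_get_version call has to fail (es=1) for the test to pass. Sketched by hand, with the same 5-second grace period the test uses:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt=$!
    sleep 5
    if ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: something answered on the RPC socket" >&2
    else
        echo "RPC failed as expected: --no-rpc-server leaves nothing listening"
    fi
    kill $tgt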
00:04:50.953 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.953 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.953 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1105618' 00:04:50.953 killing process with pid 1105618 00:04:50.953 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1105618 00:04:50.953 19:31:25 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1105618 00:04:50.953 00:04:50.953 real 0m5.373s 00:04:50.953 user 0m5.131s 00:04:50.953 sys 0m0.287s 00:04:50.953 19:31:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.953 19:31:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.953 ************************************ 00:04:50.953 END TEST skip_rpc 00:04:50.953 ************************************ 00:04:50.953 19:31:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:50.953 19:31:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.953 19:31:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.953 19:31:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.953 ************************************ 00:04:50.953 START TEST skip_rpc_with_json 00:04:50.953 ************************************ 00:04:50.953 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:50.953 19:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:50.953 19:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1106697 00:04:50.953 19:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.953 19:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.953 19:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1106697 00:04:50.953 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1106697 ']' 00:04:50.953 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.953 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.953 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.953 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.953 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.953 [2024-11-26 19:31:26.144438] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
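waitforlisten, used here and for every other target launch in this log, is essentially a bounded retry loop against the RPC socket. A rough sketch of the idea (the real helper in autotest_common.sh is more careful about the socket path and error reporting):

    max_retries=100
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        (( max_retries-- )) || { echo "spdk_tgt never started listening on /var/tmp/spdk.sock" >&2; exit 1; }
        sleep 0.1
    done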
00:04:50.953 [2024-11-26 19:31:26.144479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1106697 ] 00:04:50.953 [2024-11-26 19:31:26.213281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.953 [2024-11-26 19:31:26.255967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.212 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.212 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:51.212 19:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:51.212 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.212 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.212 [2024-11-26 19:31:26.463981] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:51.212 request: 00:04:51.212 { 00:04:51.212 "trtype": "tcp", 00:04:51.212 "method": "nvmf_get_transports", 00:04:51.212 "req_id": 1 00:04:51.212 } 00:04:51.212 Got JSON-RPC error response 00:04:51.212 response: 00:04:51.212 { 00:04:51.212 "code": -19, 00:04:51.212 "message": "No such device" 00:04:51.212 } 00:04:51.212 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:51.212 19:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:51.212 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.212 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.212 [2024-11-26 19:31:26.472064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.212 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.212 19:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:51.212 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.212 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.471 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.471 19:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:51.471 { 00:04:51.471 "subsystems": [ 00:04:51.471 { 00:04:51.471 "subsystem": "scheduler", 00:04:51.471 "config": [ 00:04:51.471 { 00:04:51.471 "method": "framework_set_scheduler", 00:04:51.471 "params": { 00:04:51.471 "name": "static" 00:04:51.471 } 00:04:51.471 } 00:04:51.471 ] 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "subsystem": "vmd", 00:04:51.471 "config": [] 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "subsystem": "sock", 00:04:51.471 "config": [ 00:04:51.471 { 00:04:51.471 "method": "sock_set_default_impl", 00:04:51.471 "params": { 00:04:51.471 "impl_name": "posix" 00:04:51.471 } 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "method": "sock_impl_set_options", 00:04:51.471 "params": { 00:04:51.471 "impl_name": "ssl", 00:04:51.471 "recv_buf_size": 4096, 00:04:51.471 "send_buf_size": 4096, 00:04:51.471 "enable_recv_pipe": true, 00:04:51.471 "enable_quickack": false, 00:04:51.471 
"enable_placement_id": 0, 00:04:51.471 "enable_zerocopy_send_server": true, 00:04:51.471 "enable_zerocopy_send_client": false, 00:04:51.471 "zerocopy_threshold": 0, 00:04:51.471 "tls_version": 0, 00:04:51.471 "enable_ktls": false 00:04:51.471 } 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "method": "sock_impl_set_options", 00:04:51.471 "params": { 00:04:51.471 "impl_name": "posix", 00:04:51.471 "recv_buf_size": 2097152, 00:04:51.471 "send_buf_size": 2097152, 00:04:51.471 "enable_recv_pipe": true, 00:04:51.471 "enable_quickack": false, 00:04:51.471 "enable_placement_id": 0, 00:04:51.471 "enable_zerocopy_send_server": true, 00:04:51.471 "enable_zerocopy_send_client": false, 00:04:51.471 "zerocopy_threshold": 0, 00:04:51.471 "tls_version": 0, 00:04:51.471 "enable_ktls": false 00:04:51.471 } 00:04:51.471 } 00:04:51.471 ] 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "subsystem": "iobuf", 00:04:51.471 "config": [ 00:04:51.471 { 00:04:51.471 "method": "iobuf_set_options", 00:04:51.471 "params": { 00:04:51.471 "small_pool_count": 8192, 00:04:51.471 "large_pool_count": 1024, 00:04:51.471 "small_bufsize": 8192, 00:04:51.471 "large_bufsize": 135168, 00:04:51.471 "enable_numa": false 00:04:51.471 } 00:04:51.471 } 00:04:51.471 ] 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "subsystem": "keyring", 00:04:51.471 "config": [] 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "subsystem": "vfio_user_target", 00:04:51.471 "config": null 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "subsystem": "fsdev", 00:04:51.471 "config": [ 00:04:51.471 { 00:04:51.471 "method": "fsdev_set_opts", 00:04:51.471 "params": { 00:04:51.471 "fsdev_io_pool_size": 65535, 00:04:51.471 "fsdev_io_cache_size": 256 00:04:51.471 } 00:04:51.471 } 00:04:51.471 ] 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "subsystem": "accel", 00:04:51.471 "config": [ 00:04:51.471 { 00:04:51.471 "method": "accel_set_options", 00:04:51.471 "params": { 00:04:51.471 "small_cache_size": 128, 00:04:51.471 "large_cache_size": 16, 00:04:51.471 "task_count": 2048, 00:04:51.471 "sequence_count": 2048, 00:04:51.471 "buf_count": 2048 00:04:51.471 } 00:04:51.471 } 00:04:51.471 ] 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "subsystem": "bdev", 00:04:51.471 "config": [ 00:04:51.471 { 00:04:51.471 "method": "bdev_set_options", 00:04:51.471 "params": { 00:04:51.471 "bdev_io_pool_size": 65535, 00:04:51.471 "bdev_io_cache_size": 256, 00:04:51.471 "bdev_auto_examine": true, 00:04:51.471 "iobuf_small_cache_size": 128, 00:04:51.471 "iobuf_large_cache_size": 16 00:04:51.471 } 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "method": "bdev_raid_set_options", 00:04:51.471 "params": { 00:04:51.471 "process_window_size_kb": 1024, 00:04:51.471 "process_max_bandwidth_mb_sec": 0 00:04:51.471 } 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "method": "bdev_nvme_set_options", 00:04:51.471 "params": { 00:04:51.471 "action_on_timeout": "none", 00:04:51.471 "timeout_us": 0, 00:04:51.471 "timeout_admin_us": 0, 00:04:51.471 "keep_alive_timeout_ms": 10000, 00:04:51.471 "arbitration_burst": 0, 00:04:51.471 "low_priority_weight": 0, 00:04:51.471 "medium_priority_weight": 0, 00:04:51.471 "high_priority_weight": 0, 00:04:51.471 "nvme_adminq_poll_period_us": 10000, 00:04:51.471 "nvme_ioq_poll_period_us": 0, 00:04:51.471 "io_queue_requests": 0, 00:04:51.471 "delay_cmd_submit": true, 00:04:51.471 "transport_retry_count": 4, 00:04:51.471 "bdev_retry_count": 3, 00:04:51.471 "transport_ack_timeout": 0, 00:04:51.471 "ctrlr_loss_timeout_sec": 0, 00:04:51.471 "reconnect_delay_sec": 0, 00:04:51.471 
"fast_io_fail_timeout_sec": 0, 00:04:51.471 "disable_auto_failback": false, 00:04:51.471 "generate_uuids": false, 00:04:51.471 "transport_tos": 0, 00:04:51.471 "nvme_error_stat": false, 00:04:51.471 "rdma_srq_size": 0, 00:04:51.471 "io_path_stat": false, 00:04:51.471 "allow_accel_sequence": false, 00:04:51.471 "rdma_max_cq_size": 0, 00:04:51.471 "rdma_cm_event_timeout_ms": 0, 00:04:51.471 "dhchap_digests": [ 00:04:51.471 "sha256", 00:04:51.471 "sha384", 00:04:51.471 "sha512" 00:04:51.471 ], 00:04:51.471 "dhchap_dhgroups": [ 00:04:51.471 "null", 00:04:51.471 "ffdhe2048", 00:04:51.471 "ffdhe3072", 00:04:51.471 "ffdhe4096", 00:04:51.471 "ffdhe6144", 00:04:51.471 "ffdhe8192" 00:04:51.471 ] 00:04:51.471 } 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "method": "bdev_nvme_set_hotplug", 00:04:51.471 "params": { 00:04:51.471 "period_us": 100000, 00:04:51.471 "enable": false 00:04:51.471 } 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "method": "bdev_iscsi_set_options", 00:04:51.471 "params": { 00:04:51.471 "timeout_sec": 30 00:04:51.471 } 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "method": "bdev_wait_for_examine" 00:04:51.471 } 00:04:51.471 ] 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "subsystem": "nvmf", 00:04:51.471 "config": [ 00:04:51.471 { 00:04:51.471 "method": "nvmf_set_config", 00:04:51.471 "params": { 00:04:51.471 "discovery_filter": "match_any", 00:04:51.471 "admin_cmd_passthru": { 00:04:51.471 "identify_ctrlr": false 00:04:51.471 }, 00:04:51.471 "dhchap_digests": [ 00:04:51.471 "sha256", 00:04:51.471 "sha384", 00:04:51.471 "sha512" 00:04:51.471 ], 00:04:51.471 "dhchap_dhgroups": [ 00:04:51.471 "null", 00:04:51.471 "ffdhe2048", 00:04:51.471 "ffdhe3072", 00:04:51.471 "ffdhe4096", 00:04:51.471 "ffdhe6144", 00:04:51.471 "ffdhe8192" 00:04:51.471 ] 00:04:51.471 } 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "method": "nvmf_set_max_subsystems", 00:04:51.471 "params": { 00:04:51.471 "max_subsystems": 1024 00:04:51.471 } 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "method": "nvmf_set_crdt", 00:04:51.471 "params": { 00:04:51.471 "crdt1": 0, 00:04:51.471 "crdt2": 0, 00:04:51.471 "crdt3": 0 00:04:51.471 } 00:04:51.471 }, 00:04:51.471 { 00:04:51.471 "method": "nvmf_create_transport", 00:04:51.471 "params": { 00:04:51.471 "trtype": "TCP", 00:04:51.471 "max_queue_depth": 128, 00:04:51.471 "max_io_qpairs_per_ctrlr": 127, 00:04:51.471 "in_capsule_data_size": 4096, 00:04:51.471 "max_io_size": 131072, 00:04:51.471 "io_unit_size": 131072, 00:04:51.471 "max_aq_depth": 128, 00:04:51.472 "num_shared_buffers": 511, 00:04:51.472 "buf_cache_size": 4294967295, 00:04:51.472 "dif_insert_or_strip": false, 00:04:51.472 "zcopy": false, 00:04:51.472 "c2h_success": true, 00:04:51.472 "sock_priority": 0, 00:04:51.472 "abort_timeout_sec": 1, 00:04:51.472 "ack_timeout": 0, 00:04:51.472 "data_wr_pool_size": 0 00:04:51.472 } 00:04:51.472 } 00:04:51.472 ] 00:04:51.472 }, 00:04:51.472 { 00:04:51.472 "subsystem": "nbd", 00:04:51.472 "config": [] 00:04:51.472 }, 00:04:51.472 { 00:04:51.472 "subsystem": "ublk", 00:04:51.472 "config": [] 00:04:51.472 }, 00:04:51.472 { 00:04:51.472 "subsystem": "vhost_blk", 00:04:51.472 "config": [] 00:04:51.472 }, 00:04:51.472 { 00:04:51.472 "subsystem": "scsi", 00:04:51.472 "config": null 00:04:51.472 }, 00:04:51.472 { 00:04:51.472 "subsystem": "iscsi", 00:04:51.472 "config": [ 00:04:51.472 { 00:04:51.472 "method": "iscsi_set_options", 00:04:51.472 "params": { 00:04:51.472 "node_base": "iqn.2016-06.io.spdk", 00:04:51.472 "max_sessions": 128, 00:04:51.472 "max_connections_per_session": 2, 
00:04:51.472 "max_queue_depth": 64, 00:04:51.472 "default_time2wait": 2, 00:04:51.472 "default_time2retain": 20, 00:04:51.472 "first_burst_length": 8192, 00:04:51.472 "immediate_data": true, 00:04:51.472 "allow_duplicated_isid": false, 00:04:51.472 "error_recovery_level": 0, 00:04:51.472 "nop_timeout": 60, 00:04:51.472 "nop_in_interval": 30, 00:04:51.472 "disable_chap": false, 00:04:51.472 "require_chap": false, 00:04:51.472 "mutual_chap": false, 00:04:51.472 "chap_group": 0, 00:04:51.472 "max_large_datain_per_connection": 64, 00:04:51.472 "max_r2t_per_connection": 4, 00:04:51.472 "pdu_pool_size": 36864, 00:04:51.472 "immediate_data_pool_size": 16384, 00:04:51.472 "data_out_pool_size": 2048 00:04:51.472 } 00:04:51.472 } 00:04:51.472 ] 00:04:51.472 }, 00:04:51.472 { 00:04:51.472 "subsystem": "vhost_scsi", 00:04:51.472 "config": [] 00:04:51.472 } 00:04:51.472 ] 00:04:51.472 } 00:04:51.472 19:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:51.472 19:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1106697 00:04:51.472 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1106697 ']' 00:04:51.472 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1106697 00:04:51.472 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:51.472 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.472 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1106697 00:04:51.472 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.472 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.472 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1106697' 00:04:51.472 killing process with pid 1106697 00:04:51.472 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1106697 00:04:51.472 19:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1106697 00:04:51.731 19:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1106721 00:04:51.731 19:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:51.731 19:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:56.999 19:31:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1106721 00:04:56.999 19:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1106721 ']' 00:04:56.999 19:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1106721 00:04:56.999 19:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:56.999 19:31:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.999 19:31:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1106721 00:04:56.999 19:31:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.999 19:31:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.999 19:31:32 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1106721' 00:04:56.999 killing process with pid 1106721 00:04:56.999 19:31:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1106721 00:04:56.999 19:31:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1106721 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:57.257 00:04:57.257 real 0m6.223s 00:04:57.257 user 0m5.921s 00:04:57.257 sys 0m0.602s 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.257 ************************************ 00:04:57.257 END TEST skip_rpc_with_json 00:04:57.257 ************************************ 00:04:57.257 19:31:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:57.257 19:31:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.257 19:31:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.257 19:31:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.257 ************************************ 00:04:57.257 START TEST skip_rpc_with_delay 00:04:57.257 ************************************ 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.257 [2024-11-26 19:31:32.468033] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:57.257 00:04:57.257 real 0m0.047s 00:04:57.257 user 0m0.021s 00:04:57.257 sys 0m0.026s 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.257 19:31:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:57.257 ************************************ 00:04:57.257 END TEST skip_rpc_with_delay 00:04:57.257 ************************************ 00:04:57.257 19:31:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:57.257 19:31:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:57.257 19:31:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:57.257 19:31:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.257 19:31:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.257 19:31:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.515 ************************************ 00:04:57.515 START TEST exit_on_failed_rpc_init 00:04:57.515 ************************************ 00:04:57.515 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:57.515 19:31:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1107828 00:04:57.515 19:31:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1107828 00:04:57.515 19:31:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.515 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1107828 ']' 00:04:57.515 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.515 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.515 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.515 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.515 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.515 [2024-11-26 19:31:32.599387] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
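The config.json that skip_rpc_with_json saved and replayed above follows spdk_tgt's generic JSON layout: a top-level "subsystems" array whose entries each name a subsystem and list RPC method/params pairs to apply at startup. A minimal hand-written file of the same shape might look like the sketch below (the Malloc bdev and its sizes are illustrative, not values taken from this run):

    cat > /tmp/minimal_config.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 } },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    ./build/bin/spdk_tgt -m 0x1 --json /tmp/minimal_config.json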
00:04:57.515 [2024-11-26 19:31:32.599451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1107828 ] 00:04:57.516 [2024-11-26 19:31:32.671502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.516 [2024-11-26 19:31:32.715981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:57.774 19:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:57.774 [2024-11-26 19:31:32.951957] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:04:57.774 [2024-11-26 19:31:32.952022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1107840 ] 00:04:57.774 [2024-11-26 19:31:33.021898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.774 [2024-11-26 19:31:33.061740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.774 [2024-11-26 19:31:33.061814] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
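The error above is the expected outcome of exit_on_failed_rpc_init: the second spdk_tgt (pid 1107840, core mask 0x2) tries to listen on the default RPC socket /var/tmp/spdk.sock, which the first instance (pid 1107828) already holds, so rpc.c refuses to listen; the lines that follow show the app stopping with a non-zero status, which is exactly what the test asserts. Outside this negative test, two targets can coexist by giving each its own RPC socket with -r (paths below are illustrative); as the per-pid --file-prefix in the EAL parameters above shows, each instance already gets its own DPDK state:

    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &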
00:04:57.774 [2024-11-26 19:31:33.061827] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:57.774 [2024-11-26 19:31:33.061835] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1107828 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1107828 ']' 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1107828 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1107828 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.032 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.033 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1107828' 00:04:58.033 killing process with pid 1107828 00:04:58.033 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1107828 00:04:58.033 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1107828 00:04:58.291 00:04:58.291 real 0m0.875s 00:04:58.291 user 0m0.899s 00:04:58.291 sys 0m0.394s 00:04:58.291 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.291 19:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.291 ************************************ 00:04:58.291 END TEST exit_on_failed_rpc_init 00:04:58.291 ************************************ 00:04:58.291 19:31:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:58.291 00:04:58.291 real 0m13.041s 00:04:58.291 user 0m12.189s 00:04:58.291 sys 0m1.655s 00:04:58.291 19:31:33 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.291 19:31:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.291 ************************************ 00:04:58.291 END TEST skip_rpc 00:04:58.291 ************************************ 00:04:58.291 19:31:33 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:58.291 19:31:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.291 19:31:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.291 19:31:33 
-- common/autotest_common.sh@10 -- # set +x 00:04:58.291 ************************************ 00:04:58.291 START TEST rpc_client 00:04:58.291 ************************************ 00:04:58.291 19:31:33 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:58.549 * Looking for test storage... 00:04:58.549 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:04:58.549 19:31:33 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.549 19:31:33 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.549 19:31:33 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.549 19:31:33 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.549 19:31:33 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:58.549 19:31:33 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.550 19:31:33 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.550 --rc genhtml_branch_coverage=1 00:04:58.550 --rc genhtml_function_coverage=1 00:04:58.550 --rc genhtml_legend=1 00:04:58.550 --rc geninfo_all_blocks=1 00:04:58.550 --rc geninfo_unexecuted_blocks=1 00:04:58.550 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.550 ' 00:04:58.550 19:31:33 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.550 --rc genhtml_branch_coverage=1 00:04:58.550 --rc genhtml_function_coverage=1 00:04:58.550 --rc genhtml_legend=1 00:04:58.550 --rc geninfo_all_blocks=1 00:04:58.550 --rc geninfo_unexecuted_blocks=1 00:04:58.550 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.550 ' 00:04:58.550 19:31:33 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.550 --rc genhtml_branch_coverage=1 00:04:58.550 --rc genhtml_function_coverage=1 00:04:58.550 --rc genhtml_legend=1 00:04:58.550 --rc geninfo_all_blocks=1 00:04:58.550 --rc geninfo_unexecuted_blocks=1 00:04:58.550 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.550 ' 00:04:58.550 19:31:33 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.550 --rc genhtml_branch_coverage=1 00:04:58.550 --rc genhtml_function_coverage=1 00:04:58.550 --rc genhtml_legend=1 00:04:58.550 --rc geninfo_all_blocks=1 00:04:58.550 --rc geninfo_unexecuted_blocks=1 00:04:58.550 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.550 ' 00:04:58.550 19:31:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:58.550 OK 00:04:58.550 19:31:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:58.550 00:04:58.550 real 0m0.210s 00:04:58.550 user 0m0.120s 00:04:58.550 sys 0m0.108s 00:04:58.550 19:31:33 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 
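rpc_client_test above exercises SPDK's JSON-RPC client code against the target's Unix-domain socket; the OK line is its pass marker. The same endpoint can be driven from the shell with scripts/rpc.py, which the alias_rpc suite later in this log also uses. Two illustrative calls, assuming a target listening on the default /var/tmp/spdk.sock:

    ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods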
00:04:58.550 19:31:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:58.550 ************************************ 00:04:58.550 END TEST rpc_client 00:04:58.550 ************************************ 00:04:58.550 19:31:33 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:58.550 19:31:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.550 19:31:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.550 19:31:33 -- common/autotest_common.sh@10 -- # set +x 00:04:58.810 ************************************ 00:04:58.810 START TEST json_config 00:04:58.810 ************************************ 00:04:58.810 19:31:33 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:58.810 19:31:33 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.810 19:31:33 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.810 19:31:33 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.810 19:31:33 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.810 19:31:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.810 19:31:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.810 19:31:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.810 19:31:33 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.810 19:31:33 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.810 19:31:33 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.810 19:31:33 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.810 19:31:33 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.810 19:31:33 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.810 19:31:33 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.810 19:31:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.810 19:31:33 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:58.810 19:31:33 json_config -- scripts/common.sh@345 -- # : 1 00:04:58.810 19:31:33 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.810 19:31:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.810 19:31:33 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:58.810 19:31:33 json_config -- scripts/common.sh@353 -- # local d=1 00:04:58.810 19:31:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.810 19:31:33 json_config -- scripts/common.sh@355 -- # echo 1 00:04:58.810 19:31:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.810 19:31:33 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:58.810 19:31:33 json_config -- scripts/common.sh@353 -- # local d=2 00:04:58.810 19:31:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.810 19:31:33 json_config -- scripts/common.sh@355 -- # echo 2 00:04:58.810 19:31:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.810 19:31:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.810 19:31:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.811 19:31:33 json_config -- scripts/common.sh@368 -- # return 0 00:04:58.811 19:31:33 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.811 19:31:33 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.811 --rc genhtml_branch_coverage=1 00:04:58.811 --rc genhtml_function_coverage=1 00:04:58.811 --rc genhtml_legend=1 00:04:58.811 --rc geninfo_all_blocks=1 00:04:58.811 --rc geninfo_unexecuted_blocks=1 00:04:58.811 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.811 ' 00:04:58.811 19:31:33 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.811 --rc genhtml_branch_coverage=1 00:04:58.811 --rc genhtml_function_coverage=1 00:04:58.811 --rc genhtml_legend=1 00:04:58.811 --rc geninfo_all_blocks=1 00:04:58.811 --rc geninfo_unexecuted_blocks=1 00:04:58.811 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.811 ' 00:04:58.811 19:31:33 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.811 --rc genhtml_branch_coverage=1 00:04:58.811 --rc genhtml_function_coverage=1 00:04:58.811 --rc genhtml_legend=1 00:04:58.811 --rc geninfo_all_blocks=1 00:04:58.811 --rc geninfo_unexecuted_blocks=1 00:04:58.811 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.811 ' 00:04:58.811 19:31:34 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.811 --rc genhtml_branch_coverage=1 00:04:58.811 --rc genhtml_function_coverage=1 00:04:58.811 --rc genhtml_legend=1 00:04:58.811 --rc geninfo_all_blocks=1 00:04:58.811 --rc geninfo_unexecuted_blocks=1 00:04:58.811 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.811 ' 00:04:58.811 19:31:34 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:58.811 19:31:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.811 19:31:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.811 19:31:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.811 19:31:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.811 19:31:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.811 19:31:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.811 19:31:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.811 19:31:34 json_config -- paths/export.sh@5 -- # export PATH 00:04:58.811 19:31:34 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@51 -- # : 0 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.811 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.811 19:31:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.811 19:31:34 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:58.811 19:31:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:58.811 19:31:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:58.811 19:31:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:58.811 19:31:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:58.811 19:31:34 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:58.811 WARNING: No tests are enabled so not running JSON configuration tests 00:04:58.811 19:31:34 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:58.811 00:04:58.811 real 0m0.176s 00:04:58.811 user 0m0.109s 00:04:58.811 sys 0m0.072s 00:04:58.811 19:31:34 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.811 19:31:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.811 ************************************ 00:04:58.811 END TEST json_config 00:04:58.811 ************************************ 00:04:58.811 19:31:34 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:58.811 19:31:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.811 19:31:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.811 19:31:34 -- common/autotest_common.sh@10 -- # set +x 00:04:58.811 ************************************ 00:04:58.811 START TEST json_config_extra_key 00:04:58.811 ************************************ 00:04:58.811 19:31:34 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:59.072 19:31:34 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.072 19:31:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov 
--version 00:04:59.072 19:31:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.072 19:31:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.072 19:31:34 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.072 19:31:34 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.072 19:31:34 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:59.073 19:31:34 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.073 19:31:34 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.073 --rc genhtml_branch_coverage=1 00:04:59.073 --rc genhtml_function_coverage=1 00:04:59.073 --rc genhtml_legend=1 00:04:59.073 --rc geninfo_all_blocks=1 00:04:59.073 --rc geninfo_unexecuted_blocks=1 00:04:59.073 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:59.073 ' 00:04:59.073 19:31:34 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.073 --rc genhtml_branch_coverage=1 
00:04:59.073 --rc genhtml_function_coverage=1 00:04:59.073 --rc genhtml_legend=1 00:04:59.073 --rc geninfo_all_blocks=1 00:04:59.073 --rc geninfo_unexecuted_blocks=1 00:04:59.073 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:59.073 ' 00:04:59.073 19:31:34 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.073 --rc genhtml_branch_coverage=1 00:04:59.073 --rc genhtml_function_coverage=1 00:04:59.073 --rc genhtml_legend=1 00:04:59.073 --rc geninfo_all_blocks=1 00:04:59.073 --rc geninfo_unexecuted_blocks=1 00:04:59.073 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:59.073 ' 00:04:59.073 19:31:34 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.073 --rc genhtml_branch_coverage=1 00:04:59.073 --rc genhtml_function_coverage=1 00:04:59.073 --rc genhtml_legend=1 00:04:59.073 --rc geninfo_all_blocks=1 00:04:59.073 --rc geninfo_unexecuted_blocks=1 00:04:59.073 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:59.073 ' 00:04:59.073 19:31:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.073 19:31:34 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.073 19:31:34 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.073 19:31:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.073 19:31:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.073 19:31:34 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.073 19:31:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:59.073 19:31:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.073 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.073 19:31:34 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.073 19:31:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:59.073 19:31:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:59.073 19:31:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:04:59.073 19:31:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:59.073 19:31:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:59.073 19:31:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:59.073 19:31:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:59.073 19:31:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:59.073 19:31:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:59.073 19:31:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.073 19:31:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:59.073 INFO: launching applications... 00:04:59.073 19:31:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:59.073 19:31:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:59.073 19:31:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:59.073 19:31:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.073 19:31:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.073 19:31:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.073 19:31:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.073 19:31:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.073 19:31:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1108275 00:04:59.073 19:31:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.073 Waiting for target to run... 00:04:59.074 19:31:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1108275 /var/tmp/spdk_tgt.sock 00:04:59.074 19:31:34 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1108275 ']' 00:04:59.074 19:31:34 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.074 19:31:34 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:59.074 19:31:34 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.074 19:31:34 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
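Here the harness has launched spdk_tgt with -m 0x1 -s 1024, a dedicated RPC socket (-r /var/tmp/spdk_tgt.sock) and the extra_key.json config, and waitforlisten blocks until that socket answers before the test continues. A rough stand-alone equivalent of that launch-and-wait pattern (same flags and paths as above; the polling loop is illustrative, not the harness's actual implementation):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
            --json test/json_config/extra_key.json &
    tgt_pid=$!
    # poll the RPC socket until the target answers, give up after ~10 s
    for _ in $(seq 1 100); do
            ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
            sleep 0.1
    done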
00:04:59.074 19:31:34 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.074 19:31:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.074 [2024-11-26 19:31:34.343545] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:04:59.074 [2024-11-26 19:31:34.343658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1108275 ] 00:04:59.641 [2024-11-26 19:31:34.772763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.641 [2024-11-26 19:31:34.829134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.899 19:31:35 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.899 19:31:35 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:59.899 19:31:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:59.899 00:04:59.899 19:31:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:59.899 INFO: shutting down applications... 00:04:59.899 19:31:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:59.899 19:31:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:59.899 19:31:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:59.899 19:31:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1108275 ]] 00:04:59.899 19:31:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1108275 00:04:59.899 19:31:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:59.899 19:31:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.899 19:31:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1108275 00:04:59.899 19:31:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.466 19:31:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.466 19:31:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.466 19:31:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1108275 00:05:00.466 19:31:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:00.466 19:31:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:00.466 19:31:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:00.466 19:31:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:00.466 SPDK target shutdown done 00:05:00.466 19:31:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:00.466 Success 00:05:00.466 00:05:00.466 real 0m1.586s 00:05:00.466 user 0m1.179s 00:05:00.466 sys 0m0.582s 00:05:00.466 19:31:35 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.466 19:31:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:00.466 ************************************ 00:05:00.466 END TEST json_config_extra_key 00:05:00.466 ************************************ 00:05:00.466 19:31:35 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
00:05:00.466 19:31:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.466 19:31:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.466 19:31:35 -- common/autotest_common.sh@10 -- # set +x 00:05:00.725 ************************************ 00:05:00.725 START TEST alias_rpc 00:05:00.725 ************************************ 00:05:00.725 19:31:35 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.725 * Looking for test storage... 00:05:00.725 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:00.725 19:31:35 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.725 19:31:35 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.725 19:31:35 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.725 19:31:35 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.725 19:31:35 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:00.725 19:31:35 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.725 19:31:35 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.725 --rc genhtml_branch_coverage=1 00:05:00.725 --rc genhtml_function_coverage=1 00:05:00.725 --rc genhtml_legend=1 00:05:00.725 --rc geninfo_all_blocks=1 00:05:00.725 --rc geninfo_unexecuted_blocks=1 00:05:00.725 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:00.725 ' 00:05:00.725 19:31:35 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.725 --rc genhtml_branch_coverage=1 00:05:00.725 --rc genhtml_function_coverage=1 00:05:00.725 --rc genhtml_legend=1 00:05:00.725 --rc geninfo_all_blocks=1 00:05:00.725 --rc geninfo_unexecuted_blocks=1 00:05:00.725 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:00.725 ' 00:05:00.725 19:31:35 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.725 --rc genhtml_branch_coverage=1 00:05:00.725 --rc genhtml_function_coverage=1 00:05:00.725 --rc genhtml_legend=1 00:05:00.725 --rc geninfo_all_blocks=1 00:05:00.725 --rc geninfo_unexecuted_blocks=1 00:05:00.725 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:00.725 ' 00:05:00.725 19:31:35 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.725 --rc genhtml_branch_coverage=1 00:05:00.725 --rc genhtml_function_coverage=1 00:05:00.725 --rc genhtml_legend=1 00:05:00.725 --rc geninfo_all_blocks=1 00:05:00.725 --rc geninfo_unexecuted_blocks=1 00:05:00.725 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:00.725 ' 00:05:00.725 19:31:35 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:00.725 19:31:35 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1108603 00:05:00.725 19:31:35 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1108603 00:05:00.725 19:31:35 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.725 19:31:35 alias_rpc -- 
common/autotest_common.sh@835 -- # '[' -z 1108603 ']' 00:05:00.725 19:31:35 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.725 19:31:35 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.725 19:31:35 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.726 19:31:35 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.726 19:31:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.726 [2024-11-26 19:31:35.976404] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:00.726 [2024-11-26 19:31:35.976470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1108603 ] 00:05:00.983 [2024-11-26 19:31:36.045926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.983 [2024-11-26 19:31:36.084893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.242 19:31:36 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.242 19:31:36 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:01.242 19:31:36 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:01.242 19:31:36 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1108603 00:05:01.242 19:31:36 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1108603 ']' 00:05:01.242 19:31:36 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1108603 00:05:01.242 19:31:36 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:01.242 19:31:36 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.242 19:31:36 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1108603 00:05:01.242 19:31:36 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.242 19:31:36 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.242 19:31:36 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1108603' 00:05:01.242 killing process with pid 1108603 00:05:01.242 19:31:36 alias_rpc -- common/autotest_common.sh@973 -- # kill 1108603 00:05:01.242 19:31:36 alias_rpc -- common/autotest_common.sh@978 -- # wait 1108603 00:05:01.809 00:05:01.809 real 0m1.060s 00:05:01.809 user 0m1.058s 00:05:01.809 sys 0m0.414s 00:05:01.809 19:31:36 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.809 19:31:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.809 ************************************ 00:05:01.809 END TEST alias_rpc 00:05:01.809 ************************************ 00:05:01.809 19:31:36 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:01.809 19:31:36 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.809 19:31:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.809 19:31:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.809 19:31:36 -- common/autotest_common.sh@10 -- # set +x 00:05:01.809 ************************************ 00:05:01.809 START TEST 
spdkcli_tcp 00:05:01.809 ************************************ 00:05:01.809 19:31:36 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.809 * Looking for test storage... 00:05:01.810 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:01.810 19:31:37 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.810 19:31:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.810 19:31:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.810 19:31:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.810 19:31:37 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:01.810 19:31:37 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.810 19:31:37 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.810 --rc genhtml_branch_coverage=1 00:05:01.810 --rc genhtml_function_coverage=1 00:05:01.810 --rc genhtml_legend=1 00:05:01.810 --rc geninfo_all_blocks=1 00:05:01.810 --rc geninfo_unexecuted_blocks=1 00:05:01.810 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:01.810 ' 00:05:01.810 19:31:37 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.810 --rc genhtml_branch_coverage=1 00:05:01.810 --rc genhtml_function_coverage=1 00:05:01.810 --rc genhtml_legend=1 00:05:01.810 --rc geninfo_all_blocks=1 00:05:01.810 --rc geninfo_unexecuted_blocks=1 00:05:01.810 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:01.810 ' 00:05:01.810 19:31:37 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.810 --rc genhtml_branch_coverage=1 00:05:01.810 --rc genhtml_function_coverage=1 00:05:01.810 --rc genhtml_legend=1 00:05:01.810 --rc geninfo_all_blocks=1 00:05:01.810 --rc geninfo_unexecuted_blocks=1 00:05:01.810 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:01.810 ' 00:05:01.810 19:31:37 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.810 --rc genhtml_branch_coverage=1 00:05:01.810 --rc genhtml_function_coverage=1 00:05:01.810 --rc genhtml_legend=1 00:05:01.810 --rc geninfo_all_blocks=1 00:05:01.810 --rc geninfo_unexecuted_blocks=1 00:05:01.810 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:01.810 ' 00:05:01.810 19:31:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:01.810 19:31:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:01.810 19:31:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:01.810 19:31:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:01.810 19:31:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:01.810 19:31:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:01.810 19:31:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:01.810 19:31:37 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.810 19:31:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.069 19:31:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1108926 00:05:02.069 19:31:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:02.069 19:31:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1108926 00:05:02.069 19:31:37 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1108926 ']' 00:05:02.069 19:31:37 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.069 19:31:37 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.069 19:31:37 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.069 19:31:37 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.069 19:31:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.069 [2024-11-26 19:31:37.147669] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:02.069 [2024-11-26 19:31:37.147733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1108926 ] 00:05:02.069 [2024-11-26 19:31:37.218348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.069 [2024-11-26 19:31:37.258979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.069 [2024-11-26 19:31:37.258981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.327 19:31:37 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.327 19:31:37 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:02.327 19:31:37 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1108930 00:05:02.328 19:31:37 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:02.328 19:31:37 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:02.328 [ 00:05:02.328 "spdk_get_version", 00:05:02.328 "rpc_get_methods", 00:05:02.328 "notify_get_notifications", 00:05:02.328 "notify_get_types", 00:05:02.328 "trace_get_info", 00:05:02.328 "trace_get_tpoint_group_mask", 00:05:02.328 "trace_disable_tpoint_group", 00:05:02.328 "trace_enable_tpoint_group", 00:05:02.328 "trace_clear_tpoint_mask", 00:05:02.328 "trace_set_tpoint_mask", 00:05:02.328 "fsdev_set_opts", 00:05:02.328 "fsdev_get_opts", 00:05:02.328 "framework_get_pci_devices", 00:05:02.328 "framework_get_config", 00:05:02.328 "framework_get_subsystems", 00:05:02.328 "vfu_tgt_set_base_path", 00:05:02.328 
"keyring_get_keys", 00:05:02.328 "iobuf_get_stats", 00:05:02.328 "iobuf_set_options", 00:05:02.328 "sock_get_default_impl", 00:05:02.328 "sock_set_default_impl", 00:05:02.328 "sock_impl_set_options", 00:05:02.328 "sock_impl_get_options", 00:05:02.328 "vmd_rescan", 00:05:02.328 "vmd_remove_device", 00:05:02.328 "vmd_enable", 00:05:02.328 "accel_get_stats", 00:05:02.328 "accel_set_options", 00:05:02.328 "accel_set_driver", 00:05:02.328 "accel_crypto_key_destroy", 00:05:02.328 "accel_crypto_keys_get", 00:05:02.328 "accel_crypto_key_create", 00:05:02.328 "accel_assign_opc", 00:05:02.328 "accel_get_module_info", 00:05:02.328 "accel_get_opc_assignments", 00:05:02.328 "bdev_get_histogram", 00:05:02.328 "bdev_enable_histogram", 00:05:02.328 "bdev_set_qos_limit", 00:05:02.328 "bdev_set_qd_sampling_period", 00:05:02.328 "bdev_get_bdevs", 00:05:02.328 "bdev_reset_iostat", 00:05:02.328 "bdev_get_iostat", 00:05:02.328 "bdev_examine", 00:05:02.328 "bdev_wait_for_examine", 00:05:02.328 "bdev_set_options", 00:05:02.328 "scsi_get_devices", 00:05:02.328 "thread_set_cpumask", 00:05:02.328 "scheduler_set_options", 00:05:02.328 "framework_get_governor", 00:05:02.328 "framework_get_scheduler", 00:05:02.328 "framework_set_scheduler", 00:05:02.328 "framework_get_reactors", 00:05:02.328 "thread_get_io_channels", 00:05:02.328 "thread_get_pollers", 00:05:02.328 "thread_get_stats", 00:05:02.328 "framework_monitor_context_switch", 00:05:02.328 "spdk_kill_instance", 00:05:02.328 "log_enable_timestamps", 00:05:02.328 "log_get_flags", 00:05:02.328 "log_clear_flag", 00:05:02.328 "log_set_flag", 00:05:02.328 "log_get_level", 00:05:02.328 "log_set_level", 00:05:02.328 "log_get_print_level", 00:05:02.328 "log_set_print_level", 00:05:02.328 "framework_enable_cpumask_locks", 00:05:02.328 "framework_disable_cpumask_locks", 00:05:02.328 "framework_wait_init", 00:05:02.328 "framework_start_init", 00:05:02.328 "virtio_blk_create_transport", 00:05:02.328 "virtio_blk_get_transports", 00:05:02.328 "vhost_controller_set_coalescing", 00:05:02.328 "vhost_get_controllers", 00:05:02.328 "vhost_delete_controller", 00:05:02.328 "vhost_create_blk_controller", 00:05:02.328 "vhost_scsi_controller_remove_target", 00:05:02.328 "vhost_scsi_controller_add_target", 00:05:02.328 "vhost_start_scsi_controller", 00:05:02.328 "vhost_create_scsi_controller", 00:05:02.328 "ublk_recover_disk", 00:05:02.328 "ublk_get_disks", 00:05:02.328 "ublk_stop_disk", 00:05:02.328 "ublk_start_disk", 00:05:02.328 "ublk_destroy_target", 00:05:02.328 "ublk_create_target", 00:05:02.328 "nbd_get_disks", 00:05:02.328 "nbd_stop_disk", 00:05:02.328 "nbd_start_disk", 00:05:02.328 "env_dpdk_get_mem_stats", 00:05:02.328 "nvmf_stop_mdns_prr", 00:05:02.328 "nvmf_publish_mdns_prr", 00:05:02.328 "nvmf_subsystem_get_listeners", 00:05:02.328 "nvmf_subsystem_get_qpairs", 00:05:02.328 "nvmf_subsystem_get_controllers", 00:05:02.328 "nvmf_get_stats", 00:05:02.328 "nvmf_get_transports", 00:05:02.328 "nvmf_create_transport", 00:05:02.328 "nvmf_get_targets", 00:05:02.328 "nvmf_delete_target", 00:05:02.328 "nvmf_create_target", 00:05:02.328 "nvmf_subsystem_allow_any_host", 00:05:02.328 "nvmf_subsystem_set_keys", 00:05:02.328 "nvmf_subsystem_remove_host", 00:05:02.328 "nvmf_subsystem_add_host", 00:05:02.328 "nvmf_ns_remove_host", 00:05:02.328 "nvmf_ns_add_host", 00:05:02.328 "nvmf_subsystem_remove_ns", 00:05:02.328 "nvmf_subsystem_set_ns_ana_group", 00:05:02.328 "nvmf_subsystem_add_ns", 00:05:02.328 "nvmf_subsystem_listener_set_ana_state", 00:05:02.328 "nvmf_discovery_get_referrals", 
00:05:02.328 "nvmf_discovery_remove_referral", 00:05:02.328 "nvmf_discovery_add_referral", 00:05:02.328 "nvmf_subsystem_remove_listener", 00:05:02.328 "nvmf_subsystem_add_listener", 00:05:02.328 "nvmf_delete_subsystem", 00:05:02.328 "nvmf_create_subsystem", 00:05:02.328 "nvmf_get_subsystems", 00:05:02.328 "nvmf_set_crdt", 00:05:02.328 "nvmf_set_config", 00:05:02.328 "nvmf_set_max_subsystems", 00:05:02.328 "iscsi_get_histogram", 00:05:02.328 "iscsi_enable_histogram", 00:05:02.328 "iscsi_set_options", 00:05:02.328 "iscsi_get_auth_groups", 00:05:02.328 "iscsi_auth_group_remove_secret", 00:05:02.328 "iscsi_auth_group_add_secret", 00:05:02.328 "iscsi_delete_auth_group", 00:05:02.328 "iscsi_create_auth_group", 00:05:02.328 "iscsi_set_discovery_auth", 00:05:02.328 "iscsi_get_options", 00:05:02.328 "iscsi_target_node_request_logout", 00:05:02.328 "iscsi_target_node_set_redirect", 00:05:02.328 "iscsi_target_node_set_auth", 00:05:02.328 "iscsi_target_node_add_lun", 00:05:02.328 "iscsi_get_stats", 00:05:02.328 "iscsi_get_connections", 00:05:02.328 "iscsi_portal_group_set_auth", 00:05:02.328 "iscsi_start_portal_group", 00:05:02.328 "iscsi_delete_portal_group", 00:05:02.328 "iscsi_create_portal_group", 00:05:02.328 "iscsi_get_portal_groups", 00:05:02.328 "iscsi_delete_target_node", 00:05:02.328 "iscsi_target_node_remove_pg_ig_maps", 00:05:02.328 "iscsi_target_node_add_pg_ig_maps", 00:05:02.328 "iscsi_create_target_node", 00:05:02.328 "iscsi_get_target_nodes", 00:05:02.328 "iscsi_delete_initiator_group", 00:05:02.328 "iscsi_initiator_group_remove_initiators", 00:05:02.328 "iscsi_initiator_group_add_initiators", 00:05:02.328 "iscsi_create_initiator_group", 00:05:02.328 "iscsi_get_initiator_groups", 00:05:02.328 "fsdev_aio_delete", 00:05:02.328 "fsdev_aio_create", 00:05:02.328 "keyring_linux_set_options", 00:05:02.328 "keyring_file_remove_key", 00:05:02.328 "keyring_file_add_key", 00:05:02.328 "vfu_virtio_create_fs_endpoint", 00:05:02.328 "vfu_virtio_create_scsi_endpoint", 00:05:02.328 "vfu_virtio_scsi_remove_target", 00:05:02.328 "vfu_virtio_scsi_add_target", 00:05:02.328 "vfu_virtio_create_blk_endpoint", 00:05:02.328 "vfu_virtio_delete_endpoint", 00:05:02.328 "iaa_scan_accel_module", 00:05:02.328 "dsa_scan_accel_module", 00:05:02.328 "ioat_scan_accel_module", 00:05:02.328 "accel_error_inject_error", 00:05:02.328 "bdev_iscsi_delete", 00:05:02.328 "bdev_iscsi_create", 00:05:02.328 "bdev_iscsi_set_options", 00:05:02.328 "bdev_virtio_attach_controller", 00:05:02.328 "bdev_virtio_scsi_get_devices", 00:05:02.328 "bdev_virtio_detach_controller", 00:05:02.328 "bdev_virtio_blk_set_hotplug", 00:05:02.328 "bdev_ftl_set_property", 00:05:02.328 "bdev_ftl_get_properties", 00:05:02.328 "bdev_ftl_get_stats", 00:05:02.328 "bdev_ftl_unmap", 00:05:02.328 "bdev_ftl_unload", 00:05:02.328 "bdev_ftl_delete", 00:05:02.328 "bdev_ftl_load", 00:05:02.328 "bdev_ftl_create", 00:05:02.328 "bdev_aio_delete", 00:05:02.328 "bdev_aio_rescan", 00:05:02.328 "bdev_aio_create", 00:05:02.328 "blobfs_create", 00:05:02.328 "blobfs_detect", 00:05:02.328 "blobfs_set_cache_size", 00:05:02.328 "bdev_zone_block_delete", 00:05:02.328 "bdev_zone_block_create", 00:05:02.328 "bdev_delay_delete", 00:05:02.328 "bdev_delay_create", 00:05:02.328 "bdev_delay_update_latency", 00:05:02.328 "bdev_split_delete", 00:05:02.328 "bdev_split_create", 00:05:02.328 "bdev_error_inject_error", 00:05:02.328 "bdev_error_delete", 00:05:02.328 "bdev_error_create", 00:05:02.328 "bdev_raid_set_options", 00:05:02.328 "bdev_raid_remove_base_bdev", 00:05:02.328 
"bdev_raid_add_base_bdev", 00:05:02.328 "bdev_raid_delete", 00:05:02.328 "bdev_raid_create", 00:05:02.328 "bdev_raid_get_bdevs", 00:05:02.328 "bdev_lvol_set_parent_bdev", 00:05:02.328 "bdev_lvol_set_parent", 00:05:02.328 "bdev_lvol_check_shallow_copy", 00:05:02.328 "bdev_lvol_start_shallow_copy", 00:05:02.328 "bdev_lvol_grow_lvstore", 00:05:02.328 "bdev_lvol_get_lvols", 00:05:02.328 "bdev_lvol_get_lvstores", 00:05:02.328 "bdev_lvol_delete", 00:05:02.328 "bdev_lvol_set_read_only", 00:05:02.329 "bdev_lvol_resize", 00:05:02.329 "bdev_lvol_decouple_parent", 00:05:02.329 "bdev_lvol_inflate", 00:05:02.329 "bdev_lvol_rename", 00:05:02.329 "bdev_lvol_clone_bdev", 00:05:02.329 "bdev_lvol_clone", 00:05:02.329 "bdev_lvol_snapshot", 00:05:02.329 "bdev_lvol_create", 00:05:02.329 "bdev_lvol_delete_lvstore", 00:05:02.329 "bdev_lvol_rename_lvstore", 00:05:02.329 "bdev_lvol_create_lvstore", 00:05:02.329 "bdev_passthru_delete", 00:05:02.329 "bdev_passthru_create", 00:05:02.329 "bdev_nvme_cuse_unregister", 00:05:02.329 "bdev_nvme_cuse_register", 00:05:02.329 "bdev_opal_new_user", 00:05:02.329 "bdev_opal_set_lock_state", 00:05:02.329 "bdev_opal_delete", 00:05:02.329 "bdev_opal_get_info", 00:05:02.329 "bdev_opal_create", 00:05:02.329 "bdev_nvme_opal_revert", 00:05:02.329 "bdev_nvme_opal_init", 00:05:02.329 "bdev_nvme_send_cmd", 00:05:02.329 "bdev_nvme_set_keys", 00:05:02.329 "bdev_nvme_get_path_iostat", 00:05:02.329 "bdev_nvme_get_mdns_discovery_info", 00:05:02.329 "bdev_nvme_stop_mdns_discovery", 00:05:02.329 "bdev_nvme_start_mdns_discovery", 00:05:02.329 "bdev_nvme_set_multipath_policy", 00:05:02.329 "bdev_nvme_set_preferred_path", 00:05:02.329 "bdev_nvme_get_io_paths", 00:05:02.329 "bdev_nvme_remove_error_injection", 00:05:02.329 "bdev_nvme_add_error_injection", 00:05:02.329 "bdev_nvme_get_discovery_info", 00:05:02.329 "bdev_nvme_stop_discovery", 00:05:02.329 "bdev_nvme_start_discovery", 00:05:02.329 "bdev_nvme_get_controller_health_info", 00:05:02.329 "bdev_nvme_disable_controller", 00:05:02.329 "bdev_nvme_enable_controller", 00:05:02.329 "bdev_nvme_reset_controller", 00:05:02.329 "bdev_nvme_get_transport_statistics", 00:05:02.329 "bdev_nvme_apply_firmware", 00:05:02.329 "bdev_nvme_detach_controller", 00:05:02.329 "bdev_nvme_get_controllers", 00:05:02.329 "bdev_nvme_attach_controller", 00:05:02.329 "bdev_nvme_set_hotplug", 00:05:02.329 "bdev_nvme_set_options", 00:05:02.329 "bdev_null_resize", 00:05:02.329 "bdev_null_delete", 00:05:02.329 "bdev_null_create", 00:05:02.329 "bdev_malloc_delete", 00:05:02.329 "bdev_malloc_create" 00:05:02.329 ] 00:05:02.588 19:31:37 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:02.588 19:31:37 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:02.588 19:31:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.588 19:31:37 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:02.588 19:31:37 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1108926 00:05:02.588 19:31:37 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1108926 ']' 00:05:02.588 19:31:37 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1108926 00:05:02.588 19:31:37 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:02.588 19:31:37 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.588 19:31:37 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1108926 00:05:02.588 19:31:37 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.588 
19:31:37 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.588 19:31:37 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1108926' 00:05:02.588 killing process with pid 1108926 00:05:02.588 19:31:37 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1108926 00:05:02.588 19:31:37 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1108926 00:05:02.846 00:05:02.846 real 0m1.123s 00:05:02.846 user 0m1.851s 00:05:02.846 sys 0m0.484s 00:05:02.846 19:31:38 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.846 19:31:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.846 ************************************ 00:05:02.846 END TEST spdkcli_tcp 00:05:02.846 ************************************ 00:05:02.846 19:31:38 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.846 19:31:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.846 19:31:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.846 19:31:38 -- common/autotest_common.sh@10 -- # set +x 00:05:02.846 ************************************ 00:05:02.846 START TEST dpdk_mem_utility 00:05:02.846 ************************************ 00:05:02.846 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.105 * Looking for test storage... 00:05:03.105 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:03.105 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.105 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.105 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.105 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.105 19:31:38 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:03.105 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.105 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.105 --rc genhtml_branch_coverage=1 00:05:03.105 --rc genhtml_function_coverage=1 00:05:03.105 --rc genhtml_legend=1 00:05:03.105 --rc geninfo_all_blocks=1 00:05:03.105 --rc geninfo_unexecuted_blocks=1 00:05:03.105 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:03.105 ' 00:05:03.105 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.105 --rc genhtml_branch_coverage=1 00:05:03.105 --rc genhtml_function_coverage=1 00:05:03.105 --rc genhtml_legend=1 00:05:03.105 --rc geninfo_all_blocks=1 00:05:03.105 --rc geninfo_unexecuted_blocks=1 00:05:03.105 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:03.105 ' 00:05:03.105 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.105 --rc genhtml_branch_coverage=1 00:05:03.106 --rc genhtml_function_coverage=1 00:05:03.106 --rc genhtml_legend=1 00:05:03.106 --rc geninfo_all_blocks=1 00:05:03.106 --rc geninfo_unexecuted_blocks=1 00:05:03.106 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:03.106 ' 00:05:03.106 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.106 --rc genhtml_branch_coverage=1 00:05:03.106 --rc genhtml_function_coverage=1 00:05:03.106 --rc genhtml_legend=1 00:05:03.106 --rc geninfo_all_blocks=1 00:05:03.106 --rc geninfo_unexecuted_blocks=1 00:05:03.106 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:03.106 ' 00:05:03.106 19:31:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:03.106 19:31:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1109235 00:05:03.106 19:31:38 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.106 19:31:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1109235 00:05:03.106 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1109235 ']' 00:05:03.106 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.106 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.106 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.106 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.106 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.106 [2024-11-26 19:31:38.320503] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:03.106 [2024-11-26 19:31:38.320592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1109235 ] 00:05:03.106 [2024-11-26 19:31:38.392659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.363 [2024-11-26 19:31:38.435792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.363 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.363 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:03.363 19:31:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:03.364 19:31:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:03.364 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.364 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.364 { 00:05:03.364 "filename": "/tmp/spdk_mem_dump.txt" 00:05:03.364 } 00:05:03.364 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.364 19:31:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:03.623 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:03.623 1 heaps totaling size 818.000000 MiB 00:05:03.623 size: 818.000000 MiB heap id: 0 00:05:03.623 end heaps---------- 00:05:03.623 9 mempools totaling size 603.782043 MiB 00:05:03.623 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:03.623 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:03.623 size: 100.555481 MiB name: bdev_io_1109235 00:05:03.623 size: 50.003479 MiB name: msgpool_1109235 00:05:03.623 size: 36.509338 MiB name: fsdev_io_1109235 00:05:03.623 size: 21.763794 MiB name: PDU_Pool 00:05:03.623 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:03.623 size: 4.133484 MiB name: evtpool_1109235 00:05:03.623 size: 0.026123 MiB name: Session_Pool 00:05:03.623 end mempools------- 00:05:03.623 6 memzones totaling size 4.142822 MiB 00:05:03.623 size: 1.000366 MiB name: RG_ring_0_1109235 00:05:03.623 size: 1.000366 MiB name: RG_ring_1_1109235 00:05:03.623 size: 1.000366 MiB name: RG_ring_4_1109235 
00:05:03.623 size: 1.000366 MiB name: RG_ring_5_1109235 00:05:03.623 size: 0.125366 MiB name: RG_ring_2_1109235 00:05:03.623 size: 0.015991 MiB name: RG_ring_3_1109235 00:05:03.623 end memzones------- 00:05:03.623 19:31:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:03.623 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:03.623 list of free elements. size: 10.852478 MiB 00:05:03.623 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:03.623 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:03.623 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:03.623 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:03.623 element at address: 0x200008000000 with size: 0.959839 MiB 00:05:03.623 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:03.623 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:03.623 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:03.623 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:03.623 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:03.623 element at address: 0x200003e00000 with size: 0.490723 MiB 00:05:03.623 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:03.623 element at address: 0x200010600000 with size: 0.481934 MiB 00:05:03.623 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:03.623 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:03.623 list of standard malloc elements. size: 199.218628 MiB 00:05:03.623 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:05:03.623 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:05:03.623 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:03.623 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:03.623 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:03.623 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:03.623 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:03.623 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:03.623 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:03.623 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:03.623 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:03.623 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:03.623 element at address: 0x20000085b100 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000008df880 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:03.623 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:03.623 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:03.623 element at address: 0x200000cff0c0 with size: 
0.000183 MiB 00:05:03.623 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:05:03.623 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:05:03.623 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:05:03.623 element at address: 0x20001067b600 with size: 0.000183 MiB 00:05:03.623 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:05:03.623 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:03.623 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:03.623 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:03.623 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:03.623 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:03.623 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:03.623 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:03.623 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:03.623 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:03.623 list of memzone associated elements. size: 607.928894 MiB 00:05:03.623 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:03.623 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:03.623 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:03.623 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:03.623 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:03.623 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1109235_0 00:05:03.623 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:03.623 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1109235_0 00:05:03.623 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:05:03.623 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1109235_0 00:05:03.623 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:03.623 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:03.623 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:03.623 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:03.623 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:03.623 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1109235_0 00:05:03.623 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:03.623 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1109235 00:05:03.623 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:03.623 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1109235 00:05:03.623 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:05:03.623 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:03.623 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:03.623 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:03.623 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:05:03.623 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:03.623 element at address: 0x200003efde40 with size: 1.008118 MiB 00:05:03.623 associated memzone info: size: 1.007996 
MiB name: MP_SCSI_TASK_Pool 00:05:03.623 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:03.623 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1109235 00:05:03.623 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:03.623 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1109235 00:05:03.623 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:03.623 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1109235 00:05:03.623 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:03.623 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1109235 00:05:03.623 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:05:03.623 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1109235 00:05:03.623 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:03.623 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1109235 00:05:03.623 element at address: 0x20001067b780 with size: 0.500488 MiB 00:05:03.624 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:03.624 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:05:03.624 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:03.624 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:03.624 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:03.624 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:03.624 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1109235 00:05:03.624 element at address: 0x2000008df940 with size: 0.125488 MiB 00:05:03.624 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1109235 00:05:03.624 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:05:03.624 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:03.624 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:03.624 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:03.624 element at address: 0x2000008db680 with size: 0.016113 MiB 00:05:03.624 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1109235 00:05:03.624 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:03.624 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:03.624 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:03.624 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1109235 00:05:03.624 element at address: 0x2000008db480 with size: 0.000305 MiB 00:05:03.624 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1109235 00:05:03.624 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:03.624 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1109235 00:05:03.624 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:03.624 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:03.624 19:31:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:03.624 19:31:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1109235 00:05:03.624 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1109235 ']' 00:05:03.624 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1109235 00:05:03.624 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:03.624 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:03.624 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1109235 00:05:03.624 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.624 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.624 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1109235' 00:05:03.624 killing process with pid 1109235 00:05:03.624 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1109235 00:05:03.624 19:31:38 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1109235 00:05:03.883 00:05:03.883 real 0m1.002s 00:05:03.883 user 0m0.918s 00:05:03.883 sys 0m0.443s 00:05:03.883 19:31:39 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.883 19:31:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.883 ************************************ 00:05:03.883 END TEST dpdk_mem_utility 00:05:03.883 ************************************ 00:05:03.883 19:31:39 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:03.883 19:31:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.883 19:31:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.883 19:31:39 -- common/autotest_common.sh@10 -- # set +x 00:05:04.142 ************************************ 00:05:04.142 START TEST event 00:05:04.142 ************************************ 00:05:04.142 19:31:39 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:04.142 * Looking for test storage... 00:05:04.142 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:04.142 19:31:39 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.142 19:31:39 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.142 19:31:39 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.142 19:31:39 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.142 19:31:39 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.142 19:31:39 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.142 19:31:39 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.142 19:31:39 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.142 19:31:39 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.142 19:31:39 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.142 19:31:39 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.142 19:31:39 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.142 19:31:39 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.142 19:31:39 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.142 19:31:39 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.142 19:31:39 event -- scripts/common.sh@344 -- # case "$op" in 00:05:04.142 19:31:39 event -- scripts/common.sh@345 -- # : 1 00:05:04.142 19:31:39 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.142 19:31:39 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.142 19:31:39 event -- scripts/common.sh@365 -- # decimal 1 00:05:04.142 19:31:39 event -- scripts/common.sh@353 -- # local d=1 00:05:04.142 19:31:39 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.142 19:31:39 event -- scripts/common.sh@355 -- # echo 1 00:05:04.142 19:31:39 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.142 19:31:39 event -- scripts/common.sh@366 -- # decimal 2 00:05:04.142 19:31:39 event -- scripts/common.sh@353 -- # local d=2 00:05:04.142 19:31:39 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.142 19:31:39 event -- scripts/common.sh@355 -- # echo 2 00:05:04.142 19:31:39 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.142 19:31:39 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.142 19:31:39 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.142 19:31:39 event -- scripts/common.sh@368 -- # return 0 00:05:04.142 19:31:39 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.142 19:31:39 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.142 --rc genhtml_branch_coverage=1 00:05:04.142 --rc genhtml_function_coverage=1 00:05:04.142 --rc genhtml_legend=1 00:05:04.142 --rc geninfo_all_blocks=1 00:05:04.142 --rc geninfo_unexecuted_blocks=1 00:05:04.142 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:04.142 ' 00:05:04.142 19:31:39 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.142 --rc genhtml_branch_coverage=1 00:05:04.142 --rc genhtml_function_coverage=1 00:05:04.142 --rc genhtml_legend=1 00:05:04.142 --rc geninfo_all_blocks=1 00:05:04.142 --rc geninfo_unexecuted_blocks=1 00:05:04.142 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:04.142 ' 00:05:04.142 19:31:39 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.142 --rc genhtml_branch_coverage=1 00:05:04.142 --rc genhtml_function_coverage=1 00:05:04.142 --rc genhtml_legend=1 00:05:04.142 --rc geninfo_all_blocks=1 00:05:04.142 --rc geninfo_unexecuted_blocks=1 00:05:04.142 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:04.142 ' 00:05:04.142 19:31:39 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.142 --rc genhtml_branch_coverage=1 00:05:04.142 --rc genhtml_function_coverage=1 00:05:04.142 --rc genhtml_legend=1 00:05:04.142 --rc geninfo_all_blocks=1 00:05:04.142 --rc geninfo_unexecuted_blocks=1 00:05:04.142 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:04.142 ' 00:05:04.142 19:31:39 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:04.142 19:31:39 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:04.142 19:31:39 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.142 19:31:39 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:04.143 19:31:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 
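The lt 1.15 2 / cmp_versions 1.15 '<' 2 traces repeated before each suite above are the lcov probe in autotest_common.sh: the last field of `lcov --version` is split on dots and compared field by field against 2 with the helpers in scripts/common.sh, and based on the outcome the --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 flags plus the llvm-gcov wrapper are exported into LCOV_OPTS and LCOV. A simplified sketch of that comparison idiom follows; it is not the exact scripts/common.sh code, handles only plain numeric dotted versions, and the gating shown in the last lines is illustrative rather than the real wiring:

  # Stripped-down dotted-version comparison in the spirit of cmp_versions/lt:
  # split both strings on '.' and compare field by field, numerically.
  version_lt() {                      # version_lt 1.15 2 -> exit 0 when $1 < $2
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1
  }

  lcov_ver=$(lcov --version | awk '{print $NF}')   # same probe as in the trace
  # Illustrative use only; autotest_common.sh's exact gating may differ.
  version_lt "$lcov_ver" 2 &&
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'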
00:05:04.143 19:31:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.143 ************************************ 00:05:04.143 START TEST event_perf 00:05:04.143 ************************************ 00:05:04.143 19:31:39 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.143 Running I/O for 1 seconds...[2024-11-26 19:31:39.438181] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:04.143 [2024-11-26 19:31:39.438263] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1109357 ] 00:05:04.400 [2024-11-26 19:31:39.513043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.400 [2024-11-26 19:31:39.558501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.400 [2024-11-26 19:31:39.558605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.400 [2024-11-26 19:31:39.558703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.400 [2024-11-26 19:31:39.558706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.335 Running I/O for 1 seconds... 00:05:05.335 lcore 0: 194349 00:05:05.335 lcore 1: 194349 00:05:05.335 lcore 2: 194350 00:05:05.335 lcore 3: 194351 00:05:05.335 done. 00:05:05.335 00:05:05.335 real 0m1.178s 00:05:05.335 user 0m4.092s 00:05:05.335 sys 0m0.083s 00:05:05.335 19:31:40 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.335 19:31:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.335 ************************************ 00:05:05.335 END TEST event_perf 00:05:05.335 ************************************ 00:05:05.335 19:31:40 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:05.335 19:31:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:05.335 19:31:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.335 19:31:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.594 ************************************ 00:05:05.594 START TEST event_reactor 00:05:05.594 ************************************ 00:05:05.594 19:31:40 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:05.594 [2024-11-26 19:31:40.699471] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:05:05.594 [2024-11-26 19:31:40.699575] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1109627 ] 00:05:05.594 [2024-11-26 19:31:40.774450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.594 [2024-11-26 19:31:40.815396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.966 test_start 00:05:06.966 oneshot 00:05:06.966 tick 100 00:05:06.966 tick 100 00:05:06.966 tick 250 00:05:06.966 tick 100 00:05:06.966 tick 100 00:05:06.966 tick 100 00:05:06.966 tick 250 00:05:06.966 tick 500 00:05:06.966 tick 100 00:05:06.966 tick 100 00:05:06.966 tick 250 00:05:06.966 tick 100 00:05:06.966 tick 100 00:05:06.966 test_end 00:05:06.966 00:05:06.966 real 0m1.170s 00:05:06.966 user 0m1.090s 00:05:06.966 sys 0m0.076s 00:05:06.966 19:31:41 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.966 19:31:41 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:06.966 ************************************ 00:05:06.966 END TEST event_reactor 00:05:06.966 ************************************ 00:05:06.966 19:31:41 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.966 19:31:41 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:06.966 19:31:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.966 19:31:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.966 ************************************ 00:05:06.966 START TEST event_reactor_perf 00:05:06.966 ************************************ 00:05:06.966 19:31:41 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.966 [2024-11-26 19:31:41.950717] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:05:06.966 [2024-11-26 19:31:41.950798] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1109907 ] 00:05:06.966 [2024-11-26 19:31:42.023977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.966 [2024-11-26 19:31:42.063489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.900 test_start 00:05:07.900 test_end 00:05:07.900 Performance: 953406 events per second 00:05:07.900 00:05:07.900 real 0m1.165s 00:05:07.900 user 0m1.077s 00:05:07.900 sys 0m0.084s 00:05:07.900 19:31:43 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.900 19:31:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:07.900 ************************************ 00:05:07.900 END TEST event_reactor_perf 00:05:07.900 ************************************ 00:05:07.900 19:31:43 event -- event/event.sh@49 -- # uname -s 00:05:07.900 19:31:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:07.900 19:31:43 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:07.900 19:31:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.900 19:31:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.900 19:31:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.900 ************************************ 00:05:07.900 START TEST event_scheduler 00:05:07.900 ************************************ 00:05:07.900 19:31:43 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:08.159 * Looking for test storage... 
00:05:08.159 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:08.159 19:31:43 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.159 19:31:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.159 19:31:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.159 19:31:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.159 19:31:43 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:08.159 19:31:43 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.159 19:31:43 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.159 --rc genhtml_branch_coverage=1 00:05:08.159 --rc genhtml_function_coverage=1 00:05:08.160 --rc genhtml_legend=1 00:05:08.160 --rc geninfo_all_blocks=1 00:05:08.160 --rc geninfo_unexecuted_blocks=1 00:05:08.160 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:08.160 ' 00:05:08.160 19:31:43 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.160 --rc genhtml_branch_coverage=1 00:05:08.160 --rc genhtml_function_coverage=1 00:05:08.160 --rc genhtml_legend=1 00:05:08.160 --rc geninfo_all_blocks=1 00:05:08.160 --rc geninfo_unexecuted_blocks=1 00:05:08.160 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:08.160 ' 00:05:08.160 19:31:43 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.160 --rc genhtml_branch_coverage=1 00:05:08.160 --rc genhtml_function_coverage=1 00:05:08.160 --rc genhtml_legend=1 00:05:08.160 --rc geninfo_all_blocks=1 00:05:08.160 --rc geninfo_unexecuted_blocks=1 00:05:08.160 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:08.160 ' 00:05:08.160 19:31:43 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.160 --rc genhtml_branch_coverage=1 00:05:08.160 --rc genhtml_function_coverage=1 00:05:08.160 --rc genhtml_legend=1 00:05:08.160 --rc geninfo_all_blocks=1 00:05:08.160 --rc geninfo_unexecuted_blocks=1 00:05:08.160 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:08.160 ' 00:05:08.160 19:31:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:08.160 19:31:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1110234 00:05:08.160 19:31:43 event.event_scheduler -- 
scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:08.160 19:31:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.160 19:31:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1110234 00:05:08.160 19:31:43 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1110234 ']' 00:05:08.160 19:31:43 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.160 19:31:43 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.160 19:31:43 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.160 19:31:43 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.160 19:31:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.160 [2024-11-26 19:31:43.396824] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:08.160 [2024-11-26 19:31:43.396892] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1110234 ] 00:05:08.160 [2024-11-26 19:31:43.465121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.419 [2024-11-26 19:31:43.508089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.419 [2024-11-26 19:31:43.508176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.419 [2024-11-26 19:31:43.508270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.419 [2024-11-26 19:31:43.508272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.419 19:31:43 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.419 19:31:43 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:08.419 19:31:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:08.419 19:31:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.419 19:31:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.419 [2024-11-26 19:31:43.572944] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:08.419 [2024-11-26 19:31:43.572967] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:08.419 [2024-11-26 19:31:43.572979] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:08.419 [2024-11-26 19:31:43.572986] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:08.419 [2024-11-26 19:31:43.572994] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:08.419 19:31:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.419 19:31:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:08.419 19:31:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 
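
The trace at this point is scheduler.sh starting the test app in a paused state (--wait-for-rpc), selecting the dynamic scheduler, and then resuming initialization. A minimal stand-alone sketch of that sequence, using scripts/rpc.py directly instead of the rpc_cmd and waitforlisten helpers; SPDK_DIR is an assumed checkout path:

#!/usr/bin/env bash
# Sketch only: launch the event/scheduler test app paused, switch it to the
# dynamic scheduler, then let subsystem init and the reactors start.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}          # assumed location of the build

"$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
trap 'kill "$scheduler_pid" 2>/dev/null' EXIT

# Poll until the app answers on the default RPC socket (/var/tmp/spdk.sock);
# the real waitforlisten helper does essentially this, with more checks.
for _ in {1..40}; do
    "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null && break
    sleep 0.5
done

"$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic   # while still paused
"$SPDK_DIR/scripts/rpc.py" framework_start_init              # reactors start polling
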
00:05:08.419 19:31:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.419 [2024-11-26 19:31:43.647402] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:08.419 19:31:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.419 19:31:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:08.419 19:31:43 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.419 19:31:43 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.419 19:31:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.419 ************************************ 00:05:08.419 START TEST scheduler_create_thread 00:05:08.419 ************************************ 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.419 2 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.419 3 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.419 4 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.419 5 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:08.419 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.419 
19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.419 6 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.679 7 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.679 8 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.679 9 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.679 10 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.679 19:31:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.246 19:31:44 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.246 19:31:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:09.246 19:31:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.246 19:31:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.621 19:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.621 19:31:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:10.621 19:31:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:10.621 19:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.621 19:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.556 19:31:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.556 00:05:11.556 real 0m3.096s 00:05:11.556 user 0m0.021s 00:05:11.556 sys 0m0.010s 00:05:11.556 19:31:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.556 19:31:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.556 ************************************ 00:05:11.556 END TEST scheduler_create_thread 00:05:11.556 ************************************ 00:05:11.556 19:31:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:11.556 19:31:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1110234 00:05:11.556 19:31:46 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1110234 ']' 00:05:11.556 19:31:46 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1110234 00:05:11.556 19:31:46 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:11.556 19:31:46 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.556 19:31:46 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1110234 00:05:11.815 19:31:46 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:11.815 19:31:46 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:11.815 19:31:46 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1110234' 00:05:11.815 killing process with pid 1110234 00:05:11.815 19:31:46 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1110234 00:05:11.815 19:31:46 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1110234 00:05:12.074 [2024-11-26 19:31:47.158633] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
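
The process teardown traced just before the timing summary is autotest_common.sh's killprocess helper: confirm the PID is still alive, look up its command name (reactor_2 here), make sure it is not the sudo wrapper, then kill it and wait for it to be reaped. A simplified stand-in, not the exact helper:

# Simplified killprocess-style teardown (sketch, not the autotest helper).
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1            # is it still running?
    local name
    name=$(ps --no-headers -o comm= -p "$pid")        # e.g. reactor_2
    [ "$name" = sudo ] && return 1                    # never signal the sudo shim
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                           # reap it (works when pid is our child)
}
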
00:05:12.074 00:05:12.074 real 0m4.160s 00:05:12.074 user 0m6.667s 00:05:12.074 sys 0m0.425s 00:05:12.074 19:31:47 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.074 19:31:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.074 ************************************ 00:05:12.074 END TEST event_scheduler 00:05:12.074 ************************************ 00:05:12.333 19:31:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:12.333 19:31:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:12.333 19:31:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.333 19:31:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.333 19:31:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.333 ************************************ 00:05:12.333 START TEST app_repeat 00:05:12.333 ************************************ 00:05:12.333 19:31:47 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:12.333 19:31:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.333 19:31:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.333 19:31:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:12.333 19:31:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.333 19:31:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:12.333 19:31:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:12.333 19:31:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:12.333 19:31:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1110977 00:05:12.333 19:31:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.333 19:31:47 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:12.333 19:31:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1110977' 00:05:12.333 Process app_repeat pid: 1110977 00:05:12.333 19:31:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.333 19:31:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:12.333 spdk_app_start Round 0 00:05:12.333 19:31:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1110977 /var/tmp/spdk-nbd.sock 00:05:12.333 19:31:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1110977 ']' 00:05:12.333 19:31:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.333 19:31:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.333 19:31:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.333 19:31:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.333 19:31:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.333 [2024-11-26 19:31:47.462331] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:05:12.333 [2024-11-26 19:31:47.462404] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1110977 ] 00:05:12.333 [2024-11-26 19:31:47.537740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.333 [2024-11-26 19:31:47.578752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.333 [2024-11-26 19:31:47.578754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.592 19:31:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.592 19:31:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:12.592 19:31:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.592 Malloc0 00:05:12.592 19:31:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.850 Malloc1 00:05:12.850 19:31:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.850 19:31:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.109 /dev/nbd0 00:05:13.109 19:31:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.109 19:31:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.109 19:31:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:13.109 19:31:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:13.109 19:31:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:13.109 19:31:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:13.109 19:31:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:13.109 19:31:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:13.109 19:31:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:13.109 19:31:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:13.109 19:31:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.109 1+0 records in 00:05:13.109 1+0 records out 00:05:13.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170099 s, 24.1 MB/s 00:05:13.109 19:31:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:13.109 19:31:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:13.109 19:31:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:13.109 19:31:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:13.109 19:31:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:13.109 19:31:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.109 19:31:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.109 19:31:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.367 /dev/nbd1 00:05:13.367 19:31:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.367 19:31:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.367 19:31:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:13.367 19:31:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:13.367 19:31:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:13.367 19:31:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:13.367 19:31:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:13.367 19:31:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:13.367 19:31:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:13.367 19:31:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:13.367 19:31:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.367 1+0 records in 00:05:13.367 1+0 records out 00:05:13.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168264 s, 24.3 MB/s 00:05:13.367 19:31:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:13.367 19:31:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:13.367 19:31:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:13.367 19:31:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:13.367 19:31:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:13.367 19:31:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.367 19:31:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
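
What round 0 has done so far is create two 64 MiB malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, and poll each device until a direct 4 KiB read succeeds. A condensed sketch of that setup against the app_repeat RPC socket; the rpc wrapper and scratch path are illustrative:

# Sketch of the bdev/nbd setup and the waitfornbd-style readiness check.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }

rpc bdev_malloc_create 64 4096          # auto-named Malloc0 (64 MiB, 4 KiB blocks)
rpc bdev_malloc_create 64 4096          # auto-named Malloc1
rpc nbd_start_disk Malloc0 /dev/nbd0
rpc nbd_start_disk Malloc1 /dev/nbd1

for nbd in nbd0 nbd1; do
    for _ in {1..20}; do
        # Ready once the device shows up in /proc/partitions and a direct
        # one-block read from it works.
        grep -q -w "$nbd" /proc/partitions &&
            dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null &&
            break
        sleep 0.1
    done
done
rm -f /tmp/nbdtest
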
00:05:13.367 19:31:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.367 19:31:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.368 19:31:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.626 19:31:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.626 { 00:05:13.626 "nbd_device": "/dev/nbd0", 00:05:13.626 "bdev_name": "Malloc0" 00:05:13.626 }, 00:05:13.626 { 00:05:13.626 "nbd_device": "/dev/nbd1", 00:05:13.626 "bdev_name": "Malloc1" 00:05:13.626 } 00:05:13.626 ]' 00:05:13.626 19:31:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.626 { 00:05:13.626 "nbd_device": "/dev/nbd0", 00:05:13.626 "bdev_name": "Malloc0" 00:05:13.626 }, 00:05:13.626 { 00:05:13.626 "nbd_device": "/dev/nbd1", 00:05:13.626 "bdev_name": "Malloc1" 00:05:13.626 } 00:05:13.626 ]' 00:05:13.626 19:31:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.626 19:31:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.626 /dev/nbd1' 00:05:13.626 19:31:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.626 /dev/nbd1' 00:05:13.626 19:31:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.626 19:31:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.626 19:31:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.626 19:31:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.626 19:31:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.627 256+0 records in 00:05:13.627 256+0 records out 00:05:13.627 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103536 s, 101 MB/s 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.627 256+0 records in 00:05:13.627 256+0 records out 00:05:13.627 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196078 s, 53.5 MB/s 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.627 256+0 records in 00:05:13.627 256+0 records out 00:05:13.627 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210795 s, 49.7 MB/s 
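
The write pass just traced, and the verify pass that follows, is nbd_dd_data_verify: fill a 1 MiB scratch file from /dev/urandom, dd it onto each nbd device with O_DIRECT, then read it back with cmp. Roughly, with an illustrative scratch path:

# Sketch of the 1 MiB write/verify pass over both nbd devices.
pattern=/tmp/nbdrandtest
dd if=/dev/urandom of="$pattern" bs=4096 count=256      # 1 MiB random pattern

for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$pattern" "$nbd"                      # any mismatch fails the test
done
rm -f "$pattern"
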
00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.627 19:31:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.886 19:31:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.886 19:31:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.886 19:31:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.886 19:31:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.886 19:31:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.886 19:31:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.886 19:31:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.886 19:31:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.886 19:31:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.886 19:31:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:14.144 19:31:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:14.144 19:31:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:14.144 19:31:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:14.144 19:31:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.144 19:31:49 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.144 19:31:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:14.144 19:31:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.144 19:31:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.144 19:31:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.144 19:31:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.144 19:31:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.402 19:31:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:14.402 19:31:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:14.402 19:31:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.402 19:31:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:14.402 19:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:14.402 19:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.402 19:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:14.402 19:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:14.402 19:31:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:14.402 19:31:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:14.402 19:31:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:14.402 19:31:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:14.402 19:31:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.661 19:31:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:14.661 [2024-11-26 19:31:49.885332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.661 [2024-11-26 19:31:49.921331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.661 [2024-11-26 19:31:49.921333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.661 [2024-11-26 19:31:49.962061] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.661 [2024-11-26 19:31:49.962110] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.949 19:31:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.949 19:31:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:17.949 spdk_app_start Round 1 00:05:17.949 19:31:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1110977 /var/tmp/spdk-nbd.sock 00:05:17.949 19:31:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1110977 ']' 00:05:17.949 19:31:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.949 19:31:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.949 19:31:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
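
Each app_repeat round ends the same way: tear down the nbd devices, ask the app to restart its SPDK instance with spdk_kill_instance SIGTERM, sleep, and go around again; note the pid stays 1110977, so the same process serves every round. The loop has roughly this shape, where setup_and_verify is a hypothetical wrapper for the bdev/nbd setup and write/verify steps sketched earlier:

# Rough shape of the event.sh repeat loop (three rounds against one process).
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }
setup_and_verify() { :; }    # hypothetical stand-in for the earlier sketches

for round in 0 1 2; do
    echo "spdk_app_start Round $round"
    setup_and_verify
    rpc spdk_kill_instance SIGTERM   # app_repeat brings its app instance back up
    sleep 3                          # give it time before the next round's RPCs
done
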
00:05:17.949 19:31:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.949 19:31:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.949 19:31:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.950 19:31:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:17.950 19:31:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.950 Malloc0 00:05:17.950 19:31:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.209 Malloc1 00:05:18.209 19:31:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.209 /dev/nbd0 00:05:18.209 19:31:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.467 19:31:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.467 19:31:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:18.467 19:31:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:18.467 19:31:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:18.467 19:31:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:18.467 19:31:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:18.467 19:31:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:18.467 19:31:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:18.467 19:31:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:18.467 19:31:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.467 1+0 records in 00:05:18.468 1+0 records out 00:05:18.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223355 s, 18.3 MB/s 00:05:18.468 19:31:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:18.468 19:31:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:18.468 19:31:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:18.468 19:31:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:18.468 19:31:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:18.468 19:31:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.468 19:31:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.468 19:31:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.468 /dev/nbd1 00:05:18.468 19:31:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.468 19:31:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.468 19:31:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:18.468 19:31:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:18.468 19:31:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:18.468 19:31:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:18.468 19:31:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:18.468 19:31:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:18.468 19:31:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:18.468 19:31:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:18.468 19:31:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.468 1+0 records in 00:05:18.468 1+0 records out 00:05:18.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262769 s, 15.6 MB/s 00:05:18.726 19:31:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:18.726 19:31:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:18.726 19:31:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:18.726 19:31:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:18.726 19:31:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:18.726 19:31:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.726 19:31:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.726 19:31:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.726 19:31:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.726 19:31:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:18.726 19:31:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.726 { 00:05:18.726 "nbd_device": "/dev/nbd0", 00:05:18.726 "bdev_name": "Malloc0" 00:05:18.726 }, 00:05:18.726 { 00:05:18.726 "nbd_device": "/dev/nbd1", 00:05:18.726 "bdev_name": "Malloc1" 00:05:18.726 } 00:05:18.726 ]' 00:05:18.726 19:31:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.726 { 00:05:18.726 "nbd_device": "/dev/nbd0", 00:05:18.726 "bdev_name": "Malloc0" 00:05:18.726 }, 00:05:18.726 { 00:05:18.726 "nbd_device": "/dev/nbd1", 00:05:18.726 "bdev_name": "Malloc1" 00:05:18.726 } 00:05:18.726 ]' 00:05:18.726 19:31:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.726 /dev/nbd1' 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.726 /dev/nbd1' 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.726 256+0 records in 00:05:18.726 256+0 records out 00:05:18.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110734 s, 94.7 MB/s 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.726 19:31:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.986 256+0 records in 00:05:18.986 256+0 records out 00:05:18.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019884 s, 52.7 MB/s 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.986 256+0 records in 00:05:18.986 256+0 records out 00:05:18.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021653 s, 48.4 MB/s 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.986 19:31:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.245 19:31:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.504 19:31:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.504 19:31:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.504 19:31:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.504 19:31:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.504 19:31:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.504 19:31:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.504 19:31:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.504 19:31:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.504 19:31:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.504 19:31:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.504 19:31:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.504 19:31:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.504 19:31:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.762 19:31:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.021 [2024-11-26 19:31:55.121863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.021 [2024-11-26 19:31:55.157858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.021 [2024-11-26 19:31:55.157862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.021 [2024-11-26 19:31:55.199064] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.021 [2024-11-26 19:31:55.199114] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.730 19:31:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.730 19:31:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:22.730 spdk_app_start Round 2 00:05:22.730 19:31:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1110977 /var/tmp/spdk-nbd.sock 00:05:22.730 19:31:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1110977 ']' 00:05:22.730 19:31:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.730 19:31:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.730 19:31:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
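
The nbd_get_count check traced above runs twice per round: right after the devices are attached (expecting 2) and again after nbd_stop_disk (expecting 0). nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, which is reduced to device paths with jq and counted with grep -c. A small sketch of that check:

# Sketch of the nbd_get_count check: list exported nbd devices and count them.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }

disks_json=$(rpc nbd_get_disks)                                  # JSON array, [] when empty
disk_names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')     # e.g. /dev/nbd0 /dev/nbd1
count=$(echo "$disk_names" | grep -c /dev/nbd || true)           # 0 when nothing is attached
echo "attached nbd devices: $count"
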
00:05:22.730 19:31:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.730 19:31:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.989 19:31:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.989 19:31:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:22.989 19:31:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.249 Malloc0 00:05:23.249 19:31:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.249 Malloc1 00:05:23.509 19:31:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.509 /dev/nbd0 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.509 19:31:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:23.509 19:31:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.509 19:31:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.509 19:31:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.509 19:31:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:23.509 19:31:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.509 19:31:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.509 19:31:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.509 19:31:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.509 1+0 records in 00:05:23.509 1+0 records out 00:05:23.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211509 s, 19.4 MB/s 00:05:23.509 19:31:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:23.509 19:31:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.509 19:31:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:23.509 19:31:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.509 19:31:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.509 19:31:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.769 /dev/nbd1 00:05:23.769 19:31:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.769 19:31:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.769 19:31:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:23.769 19:31:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.769 19:31:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.769 19:31:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.769 19:31:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:23.769 19:31:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.769 19:31:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.769 19:31:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.769 19:31:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.769 1+0 records in 00:05:23.769 1+0 records out 00:05:23.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167658 s, 24.4 MB/s 00:05:23.769 19:31:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:23.769 19:31:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.769 19:31:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:23.769 19:31:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.769 19:31:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.769 19:31:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.769 19:31:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.769 19:31:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.769 19:31:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.769 19:31:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:24.028 19:31:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.028 { 00:05:24.028 "nbd_device": "/dev/nbd0", 00:05:24.028 "bdev_name": "Malloc0" 00:05:24.028 }, 00:05:24.028 { 00:05:24.028 "nbd_device": "/dev/nbd1", 00:05:24.029 "bdev_name": "Malloc1" 00:05:24.029 } 00:05:24.029 ]' 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.029 { 00:05:24.029 "nbd_device": "/dev/nbd0", 00:05:24.029 "bdev_name": "Malloc0" 00:05:24.029 }, 00:05:24.029 { 00:05:24.029 "nbd_device": "/dev/nbd1", 00:05:24.029 "bdev_name": "Malloc1" 00:05:24.029 } 00:05:24.029 ]' 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.029 /dev/nbd1' 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.029 /dev/nbd1' 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.029 256+0 records in 00:05:24.029 256+0 records out 00:05:24.029 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111001 s, 94.5 MB/s 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.029 256+0 records in 00:05:24.029 256+0 records out 00:05:24.029 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197334 s, 53.1 MB/s 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.029 19:31:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.288 256+0 records in 00:05:24.288 256+0 records out 00:05:24.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216074 s, 48.5 MB/s 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.288 19:31:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.547 19:31:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.547 19:31:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.547 19:31:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.547 19:31:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.547 19:31:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.547 19:31:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.547 19:31:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.547 19:31:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.547 19:31:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.547 19:31:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.547 19:31:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.805 19:32:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.805 19:32:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.805 19:32:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.805 19:32:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.805 19:32:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.805 19:32:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.805 19:32:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.805 19:32:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.805 19:32:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.805 19:32:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.805 19:32:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.805 19:32:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.805 19:32:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.065 19:32:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.324 [2024-11-26 19:32:00.417678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.324 [2024-11-26 19:32:00.454876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.324 [2024-11-26 19:32:00.454878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.324 [2024-11-26 19:32:00.495177] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.324 [2024-11-26 19:32:00.495224] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.615 19:32:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1110977 /var/tmp/spdk-nbd.sock 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1110977 ']' 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
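The app_repeat round above exercises the NBD path end to end: two 64 MB malloc bdevs are created over the /var/tmp/spdk-nbd.sock RPC socket, exported as /dev/nbd0 and /dev/nbd1, filled with the same random data, verified with cmp, and detached until nbd_get_disks reports an empty list. A minimal standalone sketch of that flow, assuming an SPDK target is already listening on the same socket and the nbd kernel module is loaded (the -b names and the /tmp scratch file are choices made here, not taken from the harness):

  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  # create two malloc bdevs (64 MB, 4096-byte blocks) and export them over NBD
  $rpc -s $sock bdev_malloc_create -b Malloc0 64 4096
  $rpc -s $sock bdev_malloc_create -b Malloc1 64 4096
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
  $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1

  # write the same 1 MiB of random data to both exports, then compare byte-for-byte
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest $nbd
  done
  rm /tmp/nbdrandtest

  # detach both exports and confirm nothing is left behind
  $rpc -s $sock nbd_stop_disk /dev/nbd0
  $rpc -s $sock nbd_stop_disk /dev/nbd1
  $rpc -s $sock nbd_get_disks        # expected to print an empty list: []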
00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:28.615 19:32:03 event.app_repeat -- event/event.sh@39 -- # killprocess 1110977 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1110977 ']' 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1110977 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1110977 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1110977' 00:05:28.615 killing process with pid 1110977 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1110977 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1110977 00:05:28.615 spdk_app_start is called in Round 0. 00:05:28.615 Shutdown signal received, stop current app iteration 00:05:28.615 Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 reinitialization... 00:05:28.615 spdk_app_start is called in Round 1. 00:05:28.615 Shutdown signal received, stop current app iteration 00:05:28.615 Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 reinitialization... 00:05:28.615 spdk_app_start is called in Round 2. 00:05:28.615 Shutdown signal received, stop current app iteration 00:05:28.615 Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 reinitialization... 00:05:28.615 spdk_app_start is called in Round 3. 
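Teardown between rounds happens two ways, both visible above: spdk_kill_instance asks the app to stop itself over the RPC socket, while killprocess checks that the recorded pid still names an SPDK reactor before signalling it directly. A condensed sketch of that guard, assuming the pid was captured when the app was launched by the same shell:

  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  pid=1110977                      # placeholder: pid recorded at launch

  # graceful path: let the app shut itself down
  $rpc -s $sock spdk_kill_instance SIGTERM

  # direct path: only signal the pid if it still looks like our reactor process
  name=$(ps --no-headers -o comm= "$pid")
  if [ "$name" != sudo ]; then
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true          # reap the child; ignore the signal exit status
  fi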
00:05:28.615 Shutdown signal received, stop current app iteration 00:05:28.615 19:32:03 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:28.615 19:32:03 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:28.615 00:05:28.615 real 0m16.212s 00:05:28.615 user 0m34.796s 00:05:28.615 sys 0m3.217s 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.615 19:32:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.615 ************************************ 00:05:28.615 END TEST app_repeat 00:05:28.615 ************************************ 00:05:28.615 19:32:03 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:28.615 19:32:03 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:28.615 19:32:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.615 19:32:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.615 19:32:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.615 ************************************ 00:05:28.615 START TEST cpu_locks 00:05:28.615 ************************************ 00:05:28.615 19:32:03 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:28.615 * Looking for test storage... 00:05:28.615 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:28.616 19:32:03 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.616 19:32:03 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.616 19:32:03 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.616 19:32:03 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.616 19:32:03 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:28.616 19:32:03 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.616 19:32:03 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.616 --rc genhtml_branch_coverage=1 00:05:28.616 --rc genhtml_function_coverage=1 00:05:28.616 --rc genhtml_legend=1 00:05:28.616 --rc geninfo_all_blocks=1 00:05:28.616 --rc geninfo_unexecuted_blocks=1 00:05:28.616 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:28.616 ' 00:05:28.616 19:32:03 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.616 --rc genhtml_branch_coverage=1 00:05:28.616 --rc genhtml_function_coverage=1 00:05:28.616 --rc genhtml_legend=1 00:05:28.616 --rc geninfo_all_blocks=1 00:05:28.616 --rc geninfo_unexecuted_blocks=1 00:05:28.616 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:28.616 ' 00:05:28.616 19:32:03 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.616 --rc genhtml_branch_coverage=1 00:05:28.616 --rc genhtml_function_coverage=1 00:05:28.616 --rc genhtml_legend=1 00:05:28.616 --rc geninfo_all_blocks=1 00:05:28.616 --rc geninfo_unexecuted_blocks=1 00:05:28.616 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:28.616 ' 00:05:28.616 19:32:03 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.616 --rc genhtml_branch_coverage=1 00:05:28.616 --rc genhtml_function_coverage=1 00:05:28.616 --rc genhtml_legend=1 00:05:28.616 --rc geninfo_all_blocks=1 00:05:28.616 --rc geninfo_unexecuted_blocks=1 00:05:28.616 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:28.616 ' 00:05:28.616 19:32:03 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:28.616 19:32:03 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:28.616 19:32:03 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:28.616 19:32:03 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:28.616 19:32:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.616 19:32:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.616 19:32:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.876 ************************************ 00:05:28.876 START TEST default_locks 00:05:28.876 ************************************ 00:05:28.876 19:32:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:28.876 19:32:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1113994 00:05:28.876 19:32:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1113994 00:05:28.876 19:32:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1113994 ']' 00:05:28.876 19:32:03 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.876 19:32:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.876 19:32:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.876 19:32:03 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.876 19:32:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.876 19:32:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.876 [2024-11-26 19:32:03.979652] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
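Each cpu_locks case follows the same start-up pattern: launch a fresh spdk_tgt with an explicit core mask, record its pid, and block in waitforlisten until the RPC socket answers. A bare-bones readiness loop along those lines (rpc_get_methods is used here only as a cheap probe; the harness's own waitforlisten helper is more thorough):

  spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

  $spdk/build/bin/spdk_tgt -m 0x1 &        # pin the target to core 0
  pid=$!

  # poll the default RPC socket until the target responds, up to ~10 seconds
  for i in $(seq 1 100); do
      if $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done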
00:05:28.876 [2024-11-26 19:32:03.979734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1113994 ] 00:05:28.876 [2024-11-26 19:32:04.050881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.876 [2024-11-26 19:32:04.092789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.135 19:32:04 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.135 19:32:04 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:29.135 19:32:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1113994 00:05:29.135 19:32:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1113994 00:05:29.135 19:32:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.704 lslocks: write error 00:05:29.704 19:32:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1113994 00:05:29.704 19:32:04 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1113994 ']' 00:05:29.704 19:32:04 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1113994 00:05:29.704 19:32:04 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:29.704 19:32:04 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.704 19:32:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1113994 00:05:29.704 19:32:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.704 19:32:04 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.704 19:32:04 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1113994' 00:05:29.704 killing process with pid 1113994 00:05:29.704 19:32:04 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1113994 00:05:29.704 19:32:04 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1113994 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1113994 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1113994 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1113994 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1113994 ']' 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.965 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1113994) - No such process 00:05:29.965 ERROR: process (pid: 1113994) is no longer running 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.965 00:05:29.965 real 0m1.183s 00:05:29.965 user 0m1.161s 00:05:29.965 sys 0m0.554s 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.965 19:32:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.965 ************************************ 00:05:29.965 END TEST default_locks 00:05:29.965 ************************************ 00:05:29.965 19:32:05 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:29.965 19:32:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.965 19:32:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.965 19:32:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.965 ************************************ 00:05:29.965 START TEST default_locks_via_rpc 00:05:29.965 ************************************ 00:05:29.965 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:29.965 19:32:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1114285 00:05:29.965 19:32:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1114285 00:05:29.965 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1114285 ']' 00:05:29.965 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.965 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.965 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:29.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.965 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.965 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.965 19:32:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.965 [2024-11-26 19:32:05.234888] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:29.965 [2024-11-26 19:32:05.234943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1114285 ] 00:05:30.225 [2024-11-26 19:32:05.305461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.225 [2024-11-26 19:32:05.347395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1114285 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1114285 00:05:30.484 19:32:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.742 19:32:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1114285 00:05:30.742 19:32:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1114285 ']' 00:05:30.742 19:32:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1114285 00:05:30.742 19:32:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:30.742 19:32:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:05:30.742 19:32:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1114285 00:05:31.000 19:32:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.001 19:32:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.001 19:32:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1114285' 00:05:31.001 killing process with pid 1114285 00:05:31.001 19:32:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1114285 00:05:31.001 19:32:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1114285 00:05:31.259 00:05:31.259 real 0m1.191s 00:05:31.259 user 0m1.175s 00:05:31.259 sys 0m0.556s 00:05:31.259 19:32:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.259 19:32:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.259 ************************************ 00:05:31.259 END TEST default_locks_via_rpc 00:05:31.259 ************************************ 00:05:31.259 19:32:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:31.259 19:32:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.259 19:32:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.259 19:32:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.259 ************************************ 00:05:31.259 START TEST non_locking_app_on_locked_coremask 00:05:31.259 ************************************ 00:05:31.259 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:31.259 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1114573 00:05:31.259 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1114573 /var/tmp/spdk.sock 00:05:31.259 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.259 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1114573 ']' 00:05:31.259 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.259 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.259 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.259 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.259 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.259 [2024-11-26 19:32:06.514684] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
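Both default_locks cases above center on the per-core lock files a target holds while it owns a core: locks_exist simply greps the target's open file locks for spdk_cpu_lock, and the _via_rpc variant shows those locks can be dropped and re-taken at runtime without restarting anything. A short sketch of both checks (the pid is a placeholder for whatever waitforlisten recorded):

  spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  pid=1114285                                  # placeholder: pid of a running spdk_tgt

  # the core is "claimed" while the target holds a lock on a spdk_cpu_lock file
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held"

  # release and re-acquire the locks at runtime over the default RPC socket
  $spdk/scripts/rpc.py framework_disable_cpumask_locks   # lslocks should now find nothing
  $spdk/scripts/rpc.py framework_enable_cpumask_locks    # and the lock shows up again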
00:05:31.259 [2024-11-26 19:32:06.514762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1114573 ] 00:05:31.519 [2024-11-26 19:32:06.587513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.519 [2024-11-26 19:32:06.629338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.777 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.777 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:31.777 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1114584 00:05:31.777 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1114584 /var/tmp/spdk2.sock 00:05:31.777 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:31.777 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1114584 ']' 00:05:31.778 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.778 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.778 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.778 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.778 19:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.778 [2024-11-26 19:32:06.859719] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:31.778 [2024-11-26 19:32:06.859780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1114584 ] 00:05:31.778 [2024-11-26 19:32:06.954859] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
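non_locking_app_on_locked_coremask relies on the escape hatch shown just above: the second target is pointed at the same core but started with --disable-cpumask-locks and its own RPC socket, so it never competes for the core-0 lock (hence the "CPU core locks deactivated" notice). Roughly:

  spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

  $spdk/build/bin/spdk_tgt -m 0x1 &          # first instance takes the core-0 lock
  first=$!

  # second instance shares core 0, opts out of the lock, and gets its own socket
  $spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  second=$!

The test then uses lslocks, as above, to confirm that only the first pid is holding a spdk_cpu_lock file.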
00:05:31.778 [2024-11-26 19:32:06.954891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.778 [2024-11-26 19:32:07.034452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.710 19:32:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.710 19:32:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:32.710 19:32:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1114573 00:05:32.710 19:32:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1114573 00:05:32.710 19:32:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.645 lslocks: write error 00:05:33.645 19:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1114573 00:05:33.645 19:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1114573 ']' 00:05:33.645 19:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1114573 00:05:33.645 19:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:33.645 19:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.645 19:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1114573 00:05:33.645 19:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.645 19:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.645 19:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1114573' 00:05:33.645 killing process with pid 1114573 00:05:33.645 19:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1114573 00:05:33.645 19:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1114573 00:05:34.212 19:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1114584 00:05:34.212 19:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1114584 ']' 00:05:34.212 19:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1114584 00:05:34.212 19:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:34.212 19:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.212 19:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1114584 00:05:34.212 19:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.212 19:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.212 19:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1114584' 00:05:34.212 
killing process with pid 1114584 00:05:34.212 19:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1114584 00:05:34.212 19:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1114584 00:05:34.471 00:05:34.471 real 0m3.169s 00:05:34.471 user 0m3.333s 00:05:34.471 sys 0m1.194s 00:05:34.471 19:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.471 19:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.471 ************************************ 00:05:34.471 END TEST non_locking_app_on_locked_coremask 00:05:34.471 ************************************ 00:05:34.471 19:32:09 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:34.471 19:32:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.471 19:32:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.471 19:32:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.471 ************************************ 00:05:34.471 START TEST locking_app_on_unlocked_coremask 00:05:34.471 ************************************ 00:05:34.471 19:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:34.471 19:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1115148 00:05:34.471 19:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1115148 /var/tmp/spdk.sock 00:05:34.471 19:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:34.471 19:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1115148 ']' 00:05:34.471 19:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.471 19:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.471 19:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.471 19:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.471 19:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.471 [2024-11-26 19:32:09.762007] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:34.471 [2024-11-26 19:32:09.762070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1115148 ] 00:05:34.729 [2024-11-26 19:32:09.832887] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:34.729 [2024-11-26 19:32:09.832915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.729 [2024-11-26 19:32:09.875356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.988 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.988 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:34.988 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1115159 00:05:34.988 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1115159 /var/tmp/spdk2.sock 00:05:34.988 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:34.988 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1115159 ']' 00:05:34.988 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.988 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.988 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.988 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.988 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.988 [2024-11-26 19:32:10.115954] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:05:34.989 [2024-11-26 19:32:10.116037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1115159 ] 00:05:34.989 [2024-11-26 19:32:10.219378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.247 [2024-11-26 19:32:10.307366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.813 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.813 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.813 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1115159 00:05:35.813 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.813 19:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1115159 00:05:36.751 lslocks: write error 00:05:36.751 19:32:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1115148 00:05:36.751 19:32:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1115148 ']' 00:05:36.751 19:32:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1115148 00:05:36.751 19:32:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.751 19:32:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.751 19:32:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1115148 00:05:36.751 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.751 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.751 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1115148' 00:05:36.751 killing process with pid 1115148 00:05:36.751 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1115148 00:05:36.751 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1115148 00:05:37.320 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1115159 00:05:37.320 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1115159 ']' 00:05:37.320 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1115159 00:05:37.320 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:37.320 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.320 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1115159 00:05:37.580 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.580 19:32:12 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.580 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1115159' 00:05:37.580 killing process with pid 1115159 00:05:37.580 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1115159 00:05:37.580 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1115159 00:05:37.840 00:05:37.840 real 0m3.237s 00:05:37.840 user 0m3.413s 00:05:37.840 sys 0m1.254s 00:05:37.840 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.840 19:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.840 ************************************ 00:05:37.840 END TEST locking_app_on_unlocked_coremask 00:05:37.840 ************************************ 00:05:37.840 19:32:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:37.840 19:32:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.840 19:32:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.840 19:32:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.840 ************************************ 00:05:37.840 START TEST locking_app_on_locked_coremask 00:05:37.840 ************************************ 00:05:37.840 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:37.840 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1115733 00:05:37.840 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1115733 /var/tmp/spdk.sock 00:05:37.840 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.840 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1115733 ']' 00:05:37.840 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.840 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.840 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.840 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.840 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.840 [2024-11-26 19:32:13.077874] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:05:37.840 [2024-11-26 19:32:13.077939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1115733 ] 00:05:37.840 [2024-11-26 19:32:13.147794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.100 [2024-11-26 19:32:13.190559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.100 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.100 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:38.100 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1115743 00:05:38.100 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1115743 /var/tmp/spdk2.sock 00:05:38.100 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.100 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:38.100 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1115743 /var/tmp/spdk2.sock 00:05:38.100 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:38.100 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.100 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:38.100 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.101 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1115743 /var/tmp/spdk2.sock 00:05:38.101 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1115743 ']' 00:05:38.101 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.101 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.101 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.101 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.101 19:32:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.360 [2024-11-26 19:32:13.419787] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:05:38.360 [2024-11-26 19:32:13.419853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1115743 ] 00:05:38.360 [2024-11-26 19:32:13.518008] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1115733 has claimed it. 00:05:38.360 [2024-11-26 19:32:13.518047] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:38.929 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1115743) - No such process 00:05:38.929 ERROR: process (pid: 1115743) is no longer running 00:05:38.929 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.929 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:38.929 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:38.929 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.929 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.929 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.929 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1115733 00:05:38.929 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1115733 00:05:38.929 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.497 lslocks: write error 00:05:39.497 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1115733 00:05:39.497 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1115733 ']' 00:05:39.497 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1115733 00:05:39.497 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.497 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.497 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1115733 00:05:39.497 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.497 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.497 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1115733' 00:05:39.497 killing process with pid 1115733 00:05:39.497 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1115733 00:05:39.497 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1115733 00:05:39.756 00:05:39.756 real 0m1.920s 00:05:39.756 user 0m2.053s 00:05:39.756 sys 0m0.685s 00:05:39.756 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
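locking_app_on_locked_coremask is the negative case played out above: with the first target holding the core-0 lock, a second target on the same mask (and without --disable-cpumask-locks) is expected to abort with "Cannot create lock on core 0 ... Unable to acquire lock on assigned core mask - exiting." The harness wraps the attempt in its NOT helper so that the failure is the passing outcome; outside the harness the same expectation can be sketched as:

  spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

  $spdk/build/bin/spdk_tgt -m 0x1 &          # holds the core-0 lock
  first=$!
  sleep 1                                    # crude: give it time to claim the core

  # the second attempt should exit almost immediately because the lock is taken
  if ! $spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "second target could not take core 0, as expected"
  fi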
00:05:39.756 19:32:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.756 ************************************ 00:05:39.756 END TEST locking_app_on_locked_coremask 00:05:39.756 ************************************ 00:05:39.756 19:32:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:39.756 19:32:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.756 19:32:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.756 19:32:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.756 ************************************ 00:05:39.756 START TEST locking_overlapped_coremask 00:05:39.756 ************************************ 00:05:39.756 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:39.756 19:32:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1116073 00:05:39.756 19:32:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1116073 /var/tmp/spdk.sock 00:05:39.756 19:32:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:39.756 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1116073 ']' 00:05:39.756 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.756 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.756 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.756 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.756 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.015 [2024-11-26 19:32:15.079374] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:05:40.015 [2024-11-26 19:32:15.079437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116073 ] 00:05:40.015 [2024-11-26 19:32:15.149741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.015 [2024-11-26 19:32:15.194630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.015 [2024-11-26 19:32:15.194679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.015 [2024-11-26 19:32:15.194682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1116265 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1116265 /var/tmp/spdk2.sock 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1116265 /var/tmp/spdk2.sock 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1116265 /var/tmp/spdk2.sock 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1116265 ']' 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.275 19:32:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.275 [2024-11-26 19:32:15.433006] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:05:40.275 [2024-11-26 19:32:15.433074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116265 ] 00:05:40.275 [2024-11-26 19:32:15.528990] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1116073 has claimed it. 00:05:40.275 [2024-11-26 19:32:15.529023] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:40.843 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1116265) - No such process 00:05:40.843 ERROR: process (pid: 1116265) is no longer running 00:05:40.843 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.843 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:40.843 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:40.843 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.843 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:40.843 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.843 19:32:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:40.843 19:32:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.843 19:32:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.844 19:32:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.844 19:32:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1116073 00:05:40.844 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1116073 ']' 00:05:40.844 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1116073 00:05:40.844 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:40.844 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.844 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1116073 00:05:41.103 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.103 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.103 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1116073' 00:05:41.103 killing process with pid 1116073 00:05:41.103 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1116073 00:05:41.103 19:32:16 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1116073 00:05:41.362 00:05:41.362 real 0m1.412s 00:05:41.362 user 0m3.930s 00:05:41.362 sys 0m0.416s 00:05:41.362 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.362 19:32:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.362 ************************************ 00:05:41.362 END TEST locking_overlapped_coremask 00:05:41.362 ************************************ 00:05:41.362 19:32:16 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:41.362 19:32:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.362 19:32:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.362 19:32:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.362 ************************************ 00:05:41.362 START TEST locking_overlapped_coremask_via_rpc 00:05:41.362 ************************************ 00:05:41.362 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:41.362 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1116352 00:05:41.362 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1116352 /var/tmp/spdk.sock 00:05:41.362 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:41.362 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1116352 ']' 00:05:41.362 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.362 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.362 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.362 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.362 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.362 [2024-11-26 19:32:16.567125] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:41.362 [2024-11-26 19:32:16.567184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116352 ] 00:05:41.362 [2024-11-26 19:32:16.637517] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:41.362 [2024-11-26 19:32:16.637543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.621 [2024-11-26 19:32:16.682387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.621 [2024-11-26 19:32:16.682483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.621 [2024-11-26 19:32:16.682486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.621 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.621 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:41.621 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1116532 00:05:41.621 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1116532 /var/tmp/spdk2.sock 00:05:41.621 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:41.621 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1116532 ']' 00:05:41.621 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.621 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.621 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.621 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.621 19:32:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.621 [2024-11-26 19:32:16.915624] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:41.621 [2024-11-26 19:32:16.915690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116532 ] 00:05:41.881 [2024-11-26 19:32:17.017944] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
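For reference, the core-mask overlap this test depends on can be read straight off the two spdk_tgt invocations above: -m 0x7 covers cores 0-2 (the three reactors started for pid 1116352) and -m 0x1c covers cores 2-4, so the two targets share core 2. Both come up cleanly only because --disable-cpumask-locks defers the per-core lock files; the conflict surfaces later, once locks are re-enabled over RPC. A one-line check of the overlap (illustrative shell arithmetic, not part of the test output):

  printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4 -> bit 2, i.e. core 2 is claimed by both masks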
00:05:41.881 [2024-11-26 19:32:17.017978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.881 [2024-11-26 19:32:17.112652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.881 [2024-11-26 19:32:17.112801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.881 [2024-11-26 19:32:17.112803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.819 [2024-11-26 19:32:17.807668] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1116352 has claimed it. 
00:05:42.819 request: 00:05:42.819 { 00:05:42.819 "method": "framework_enable_cpumask_locks", 00:05:42.819 "req_id": 1 00:05:42.819 } 00:05:42.819 Got JSON-RPC error response 00:05:42.819 response: 00:05:42.819 { 00:05:42.819 "code": -32603, 00:05:42.819 "message": "Failed to claim CPU core: 2" 00:05:42.819 } 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1116352 /var/tmp/spdk.sock 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1116352 ']' 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.819 19:32:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.819 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.819 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:42.819 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1116532 /var/tmp/spdk2.sock 00:05:42.819 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1116532 ']' 00:05:42.819 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.819 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.819 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
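Putting the pieces above together, the via_rpc variant differs from the plain coremask test only in when the core locks are taken: both targets start with --disable-cpumask-locks, the first then claims /var/tmp/spdk_cpu_lock_000..002 through the framework_enable_cpumask_locks RPC, and the second target's attempt fails with the -32603 "Failed to claim CPU core: 2" response shown above. A minimal sketch of that flow, using the same binaries and sockets as this workspace; the waitforlisten retry loops and error handling of the real cpu_locks.sh are omitted:

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # first target, cores 0-2, no lock files yet
  $SPDK/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # second target, cores 2-4, separate RPC socket
  $SPDK/scripts/rpc.py framework_enable_cpumask_locks                                # first target claims spdk_cpu_lock_000..002
  $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # fails: core 2 already claimed (-32603)
  ls /var/tmp/spdk_cpu_lock_*                                                        # lock files remain held by the first target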
00:05:42.819 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.819 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.080 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.080 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:43.080 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:43.080 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:43.080 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:43.080 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:43.080 00:05:43.080 real 0m1.666s 00:05:43.080 user 0m0.791s 00:05:43.080 sys 0m0.158s 00:05:43.080 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.080 19:32:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.080 ************************************ 00:05:43.080 END TEST locking_overlapped_coremask_via_rpc 00:05:43.080 ************************************ 00:05:43.080 19:32:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:43.080 19:32:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1116352 ]] 00:05:43.080 19:32:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1116352 00:05:43.080 19:32:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1116352 ']' 00:05:43.080 19:32:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1116352 00:05:43.080 19:32:18 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:43.080 19:32:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.080 19:32:18 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1116352 00:05:43.080 19:32:18 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.080 19:32:18 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.080 19:32:18 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1116352' 00:05:43.080 killing process with pid 1116352 00:05:43.080 19:32:18 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1116352 00:05:43.080 19:32:18 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1116352 00:05:43.339 19:32:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1116532 ]] 00:05:43.339 19:32:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1116532 00:05:43.339 19:32:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1116532 ']' 00:05:43.339 19:32:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1116532 00:05:43.339 19:32:18 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:43.339 19:32:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:43.339 19:32:18 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1116532 00:05:43.597 19:32:18 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:43.597 19:32:18 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:43.597 19:32:18 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1116532' 00:05:43.597 killing process with pid 1116532 00:05:43.597 19:32:18 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1116532 00:05:43.597 19:32:18 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1116532 00:05:43.856 19:32:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.856 19:32:18 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:43.856 19:32:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1116352 ]] 00:05:43.856 19:32:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1116352 00:05:43.856 19:32:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1116352 ']' 00:05:43.856 19:32:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1116352 00:05:43.856 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1116352) - No such process 00:05:43.856 19:32:18 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1116352 is not found' 00:05:43.856 Process with pid 1116352 is not found 00:05:43.856 19:32:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1116532 ]] 00:05:43.856 19:32:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1116532 00:05:43.856 19:32:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1116532 ']' 00:05:43.856 19:32:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1116532 00:05:43.856 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1116532) - No such process 00:05:43.856 19:32:18 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1116532 is not found' 00:05:43.856 Process with pid 1116532 is not found 00:05:43.856 19:32:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.856 00:05:43.856 real 0m15.264s 00:05:43.856 user 0m25.635s 00:05:43.856 sys 0m5.911s 00:05:43.856 19:32:18 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.856 19:32:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.856 ************************************ 00:05:43.856 END TEST cpu_locks 00:05:43.856 ************************************ 00:05:43.856 00:05:43.856 real 0m39.834s 00:05:43.856 user 1m13.627s 00:05:43.856 sys 0m10.259s 00:05:43.856 19:32:19 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.856 19:32:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.856 ************************************ 00:05:43.856 END TEST event 00:05:43.856 ************************************ 00:05:43.856 19:32:19 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:43.856 19:32:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.856 19:32:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.856 19:32:19 -- common/autotest_common.sh@10 -- # set +x 00:05:43.856 ************************************ 00:05:43.856 START TEST thread 00:05:43.856 ************************************ 00:05:43.856 19:32:19 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:44.121 * Looking for test storage... 00:05:44.121 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:44.122 19:32:19 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.122 19:32:19 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.122 19:32:19 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.122 19:32:19 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.122 19:32:19 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.122 19:32:19 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.122 19:32:19 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.122 19:32:19 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.122 19:32:19 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.122 19:32:19 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.122 19:32:19 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.122 19:32:19 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.122 19:32:19 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.122 19:32:19 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.122 19:32:19 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.122 19:32:19 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:44.122 19:32:19 thread -- scripts/common.sh@345 -- # : 1 00:05:44.122 19:32:19 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.122 19:32:19 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.122 19:32:19 thread -- scripts/common.sh@365 -- # decimal 1 00:05:44.122 19:32:19 thread -- scripts/common.sh@353 -- # local d=1 00:05:44.122 19:32:19 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.122 19:32:19 thread -- scripts/common.sh@355 -- # echo 1 00:05:44.122 19:32:19 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.122 19:32:19 thread -- scripts/common.sh@366 -- # decimal 2 00:05:44.122 19:32:19 thread -- scripts/common.sh@353 -- # local d=2 00:05:44.122 19:32:19 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.122 19:32:19 thread -- scripts/common.sh@355 -- # echo 2 00:05:44.122 19:32:19 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.122 19:32:19 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.122 19:32:19 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.122 19:32:19 thread -- scripts/common.sh@368 -- # return 0 00:05:44.122 19:32:19 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.122 19:32:19 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.122 --rc genhtml_branch_coverage=1 00:05:44.122 --rc genhtml_function_coverage=1 00:05:44.122 --rc genhtml_legend=1 00:05:44.122 --rc geninfo_all_blocks=1 00:05:44.122 --rc geninfo_unexecuted_blocks=1 00:05:44.122 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:44.122 ' 00:05:44.122 19:32:19 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.122 --rc genhtml_branch_coverage=1 00:05:44.122 --rc genhtml_function_coverage=1 00:05:44.122 --rc genhtml_legend=1 
00:05:44.122 --rc geninfo_all_blocks=1 00:05:44.122 --rc geninfo_unexecuted_blocks=1 00:05:44.122 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:44.122 ' 00:05:44.122 19:32:19 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.122 --rc genhtml_branch_coverage=1 00:05:44.122 --rc genhtml_function_coverage=1 00:05:44.122 --rc genhtml_legend=1 00:05:44.122 --rc geninfo_all_blocks=1 00:05:44.122 --rc geninfo_unexecuted_blocks=1 00:05:44.122 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:44.122 ' 00:05:44.122 19:32:19 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.122 --rc genhtml_branch_coverage=1 00:05:44.122 --rc genhtml_function_coverage=1 00:05:44.122 --rc genhtml_legend=1 00:05:44.122 --rc geninfo_all_blocks=1 00:05:44.122 --rc geninfo_unexecuted_blocks=1 00:05:44.122 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:44.122 ' 00:05:44.122 19:32:19 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:44.122 19:32:19 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:44.122 19:32:19 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.122 19:32:19 thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.122 ************************************ 00:05:44.122 START TEST thread_poller_perf 00:05:44.122 ************************************ 00:05:44.122 19:32:19 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:44.122 [2024-11-26 19:32:19.348623] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:44.122 [2024-11-26 19:32:19.348704] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116978 ] 00:05:44.122 [2024-11-26 19:32:19.421743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.381 [2024-11-26 19:32:19.461075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.381 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:45.316 [2024-11-26T18:32:20.624Z] ====================================== 00:05:45.316 [2024-11-26T18:32:20.624Z] busy:2504075718 (cyc) 00:05:45.316 [2024-11-26T18:32:20.624Z] total_run_count: 841000 00:05:45.316 [2024-11-26T18:32:20.624Z] tsc_hz: 2500000000 (cyc) 00:05:45.316 [2024-11-26T18:32:20.624Z] ====================================== 00:05:45.316 [2024-11-26T18:32:20.624Z] poller_cost: 2977 (cyc), 1190 (nsec) 00:05:45.316 00:05:45.316 real 0m1.171s 00:05:45.316 user 0m1.087s 00:05:45.316 sys 0m0.079s 00:05:45.316 19:32:20 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.316 19:32:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.316 ************************************ 00:05:45.316 END TEST thread_poller_perf 00:05:45.316 ************************************ 00:05:45.316 19:32:20 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.316 19:32:20 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:45.316 19:32:20 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.316 19:32:20 thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.316 ************************************ 00:05:45.316 START TEST thread_poller_perf 00:05:45.316 ************************************ 00:05:45.316 19:32:20 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.316 [2024-11-26 19:32:20.587576] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:45.316 [2024-11-26 19:32:20.587701] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117258 ] 00:05:45.574 [2024-11-26 19:32:20.661456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.574 [2024-11-26 19:32:20.699898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.574 Running 1000 pollers for 1 seconds with 0 microseconds period. 
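The poller_cost line in each of these summaries is simply the busy cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz; for the 1-microsecond-period run above the arithmetic works out as follows (plain shell integer math, shown only to make the figures easy to verify):

  echo $(( 2504075718 / 841000 ))             # 2977 cycles per poller invocation
  echo $(( 2977 * 1000000000 / 2500000000 ))  # ~1190 ns at tsc_hz 2500000000 (2.5 GHz)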
00:05:46.512 [2024-11-26T18:32:21.820Z] ====================================== 00:05:46.512 [2024-11-26T18:32:21.820Z] busy:2501249658 (cyc) 00:05:46.512 [2024-11-26T18:32:21.820Z] total_run_count: 13418000 00:05:46.512 [2024-11-26T18:32:21.820Z] tsc_hz: 2500000000 (cyc) 00:05:46.512 [2024-11-26T18:32:21.820Z] ====================================== 00:05:46.512 [2024-11-26T18:32:21.820Z] poller_cost: 186 (cyc), 74 (nsec) 00:05:46.512 00:05:46.512 real 0m1.165s 00:05:46.512 user 0m1.081s 00:05:46.512 sys 0m0.080s 00:05:46.512 19:32:21 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.512 19:32:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.512 ************************************ 00:05:46.512 END TEST thread_poller_perf 00:05:46.512 ************************************ 00:05:46.512 19:32:21 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:46.512 19:32:21 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:46.512 19:32:21 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.512 19:32:21 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.512 19:32:21 thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.512 ************************************ 00:05:46.512 START TEST thread_spdk_lock 00:05:46.512 ************************************ 00:05:46.512 19:32:21 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:46.771 [2024-11-26 19:32:21.829384] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:46.771 [2024-11-26 19:32:21.829475] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117539 ] 00:05:46.771 [2024-11-26 19:32:21.902104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.771 [2024-11-26 19:32:21.946349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.771 [2024-11-26 19:32:21.946352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.340 [2024-11-26 19:32:22.435489] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:47.340 [2024-11-26 19:32:22.435525] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:47.340 [2024-11-26 19:32:22.435535] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x14dbbc0 00:05:47.340 [2024-11-26 19:32:22.436250] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:47.340 [2024-11-26 19:32:22.436353] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:47.340 [2024-11-26 
19:32:22.436372] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:47.340 Starting test contend 00:05:47.340 Worker Delay Wait us Hold us Total us 00:05:47.340 0 3 170751 184433 355184 00:05:47.340 1 5 88265 284610 372875 00:05:47.340 PASS test contend 00:05:47.340 Starting test hold_by_poller 00:05:47.340 PASS test hold_by_poller 00:05:47.340 Starting test hold_by_message 00:05:47.340 PASS test hold_by_message 00:05:47.340 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:47.340 100014 assertions passed 00:05:47.340 0 assertions failed 00:05:47.340 00:05:47.340 real 0m0.653s 00:05:47.340 user 0m1.061s 00:05:47.340 sys 0m0.079s 00:05:47.340 19:32:22 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.340 19:32:22 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:47.340 ************************************ 00:05:47.340 END TEST thread_spdk_lock 00:05:47.340 ************************************ 00:05:47.340 00:05:47.340 real 0m3.407s 00:05:47.340 user 0m3.412s 00:05:47.340 sys 0m0.503s 00:05:47.340 19:32:22 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.340 19:32:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.340 ************************************ 00:05:47.340 END TEST thread 00:05:47.340 ************************************ 00:05:47.340 19:32:22 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:47.340 19:32:22 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:05:47.340 19:32:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.340 19:32:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.340 19:32:22 -- common/autotest_common.sh@10 -- # set +x 00:05:47.340 ************************************ 00:05:47.340 START TEST app_cmdline 00:05:47.340 ************************************ 00:05:47.340 19:32:22 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:05:47.599 * Looking for test storage... 
00:05:47.599 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:05:47.599 19:32:22 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.599 19:32:22 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.599 19:32:22 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.599 19:32:22 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.599 19:32:22 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.600 19:32:22 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:47.600 19:32:22 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.600 19:32:22 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.600 --rc genhtml_branch_coverage=1 00:05:47.600 --rc genhtml_function_coverage=1 00:05:47.600 --rc genhtml_legend=1 00:05:47.600 --rc geninfo_all_blocks=1 00:05:47.600 --rc geninfo_unexecuted_blocks=1 00:05:47.600 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.600 ' 00:05:47.600 19:32:22 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.600 --rc genhtml_branch_coverage=1 00:05:47.600 --rc genhtml_function_coverage=1 00:05:47.600 --rc 
genhtml_legend=1 00:05:47.600 --rc geninfo_all_blocks=1 00:05:47.600 --rc geninfo_unexecuted_blocks=1 00:05:47.600 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.600 ' 00:05:47.600 19:32:22 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.600 --rc genhtml_branch_coverage=1 00:05:47.600 --rc genhtml_function_coverage=1 00:05:47.600 --rc genhtml_legend=1 00:05:47.600 --rc geninfo_all_blocks=1 00:05:47.600 --rc geninfo_unexecuted_blocks=1 00:05:47.600 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.600 ' 00:05:47.600 19:32:22 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.600 --rc genhtml_branch_coverage=1 00:05:47.600 --rc genhtml_function_coverage=1 00:05:47.600 --rc genhtml_legend=1 00:05:47.600 --rc geninfo_all_blocks=1 00:05:47.600 --rc geninfo_unexecuted_blocks=1 00:05:47.600 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.600 ' 00:05:47.600 19:32:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:47.600 19:32:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1117665 00:05:47.600 19:32:22 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:47.600 19:32:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1117665 00:05:47.600 19:32:22 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1117665 ']' 00:05:47.600 19:32:22 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.600 19:32:22 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.600 19:32:22 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.600 19:32:22 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.600 19:32:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:47.600 [2024-11-26 19:32:22.822623] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:05:47.600 [2024-11-26 19:32:22.822714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117665 ] 00:05:47.600 [2024-11-26 19:32:22.895259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.859 [2024-11-26 19:32:22.937014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.859 19:32:23 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.859 19:32:23 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:47.859 19:32:23 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:48.120 { 00:05:48.120 "version": "SPDK v25.01-pre git sha1 f5304d661", 00:05:48.120 "fields": { 00:05:48.120 "major": 25, 00:05:48.120 "minor": 1, 00:05:48.120 "patch": 0, 00:05:48.120 "suffix": "-pre", 00:05:48.120 "commit": "f5304d661" 00:05:48.120 } 00:05:48.120 } 00:05:48.120 19:32:23 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:48.120 19:32:23 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:48.120 19:32:23 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:48.120 19:32:23 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:48.120 19:32:23 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:48.120 19:32:23 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:48.120 19:32:23 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:48.120 19:32:23 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.120 19:32:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:48.120 19:32:23 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.120 19:32:23 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:48.120 19:32:23 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:48.120 19:32:23 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.120 19:32:23 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:48.120 19:32:23 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.120 19:32:23 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:48.120 19:32:23 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.120 19:32:23 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:48.120 19:32:23 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.120 19:32:23 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:48.120 19:32:23 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.120 19:32:23 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:48.120 19:32:23 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:05:48.120 19:32:23 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.381 request: 00:05:48.381 { 00:05:48.381 "method": "env_dpdk_get_mem_stats", 00:05:48.381 "req_id": 1 00:05:48.381 } 00:05:48.381 Got JSON-RPC error response 00:05:48.381 response: 00:05:48.381 { 00:05:48.381 "code": -32601, 00:05:48.381 "message": "Method not found" 00:05:48.381 } 00:05:48.381 19:32:23 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:48.381 19:32:23 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.381 19:32:23 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.381 19:32:23 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.381 19:32:23 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1117665 00:05:48.381 19:32:23 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1117665 ']' 00:05:48.381 19:32:23 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1117665 00:05:48.381 19:32:23 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:48.381 19:32:23 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.381 19:32:23 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1117665 00:05:48.381 19:32:23 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.381 19:32:23 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.381 19:32:23 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1117665' 00:05:48.381 killing process with pid 1117665 00:05:48.381 19:32:23 app_cmdline -- common/autotest_common.sh@973 -- # kill 1117665 00:05:48.381 19:32:23 app_cmdline -- common/autotest_common.sh@978 -- # wait 1117665 00:05:48.640 00:05:48.640 real 0m1.317s 00:05:48.640 user 0m1.471s 00:05:48.640 sys 0m0.506s 00:05:48.640 19:32:23 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.640 19:32:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:48.640 ************************************ 00:05:48.640 END TEST app_cmdline 00:05:48.640 ************************************ 00:05:48.900 19:32:23 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:05:48.900 19:32:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.900 19:32:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.900 19:32:23 -- common/autotest_common.sh@10 -- # set +x 00:05:48.900 ************************************ 00:05:48.900 START TEST version 00:05:48.900 ************************************ 00:05:48.900 19:32:23 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:05:48.900 * Looking for test storage... 
00:05:48.900 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:05:48.900 19:32:24 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:48.900 19:32:24 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:48.900 19:32:24 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.900 19:32:24 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.900 19:32:24 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.900 19:32:24 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.900 19:32:24 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.900 19:32:24 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.900 19:32:24 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.900 19:32:24 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.900 19:32:24 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.900 19:32:24 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.900 19:32:24 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.900 19:32:24 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.900 19:32:24 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.900 19:32:24 version -- scripts/common.sh@344 -- # case "$op" in 00:05:48.900 19:32:24 version -- scripts/common.sh@345 -- # : 1 00:05:48.900 19:32:24 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.900 19:32:24 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.900 19:32:24 version -- scripts/common.sh@365 -- # decimal 1 00:05:48.900 19:32:24 version -- scripts/common.sh@353 -- # local d=1 00:05:48.900 19:32:24 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.900 19:32:24 version -- scripts/common.sh@355 -- # echo 1 00:05:48.900 19:32:24 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.900 19:32:24 version -- scripts/common.sh@366 -- # decimal 2 00:05:48.900 19:32:24 version -- scripts/common.sh@353 -- # local d=2 00:05:48.900 19:32:24 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.900 19:32:24 version -- scripts/common.sh@355 -- # echo 2 00:05:48.900 19:32:24 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.900 19:32:24 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.900 19:32:24 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.900 19:32:24 version -- scripts/common.sh@368 -- # return 0 00:05:48.900 19:32:24 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.900 19:32:24 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.900 --rc genhtml_branch_coverage=1 00:05:48.900 --rc genhtml_function_coverage=1 00:05:48.900 --rc genhtml_legend=1 00:05:48.900 --rc geninfo_all_blocks=1 00:05:48.900 --rc geninfo_unexecuted_blocks=1 00:05:48.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:48.900 ' 00:05:48.900 19:32:24 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.900 --rc genhtml_branch_coverage=1 00:05:48.900 --rc genhtml_function_coverage=1 00:05:48.900 --rc genhtml_legend=1 00:05:48.900 --rc geninfo_all_blocks=1 00:05:48.900 --rc geninfo_unexecuted_blocks=1 00:05:48.900 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:48.900 ' 00:05:48.900 19:32:24 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.900 --rc genhtml_branch_coverage=1 00:05:48.900 --rc genhtml_function_coverage=1 00:05:48.900 --rc genhtml_legend=1 00:05:48.900 --rc geninfo_all_blocks=1 00:05:48.900 --rc geninfo_unexecuted_blocks=1 00:05:48.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:48.900 ' 00:05:48.900 19:32:24 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.900 --rc genhtml_branch_coverage=1 00:05:48.900 --rc genhtml_function_coverage=1 00:05:48.900 --rc genhtml_legend=1 00:05:48.900 --rc geninfo_all_blocks=1 00:05:48.900 --rc geninfo_unexecuted_blocks=1 00:05:48.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:48.900 ' 00:05:48.900 19:32:24 version -- app/version.sh@17 -- # get_header_version major 00:05:48.900 19:32:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:48.900 19:32:24 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.900 19:32:24 version -- app/version.sh@14 -- # cut -f2 00:05:48.900 19:32:24 version -- app/version.sh@17 -- # major=25 00:05:48.900 19:32:24 version -- app/version.sh@18 -- # get_header_version minor 00:05:48.900 19:32:24 version -- app/version.sh@14 -- # cut -f2 00:05:48.900 19:32:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:48.900 19:32:24 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.901 19:32:24 version -- app/version.sh@18 -- # minor=1 00:05:48.901 19:32:24 version -- app/version.sh@19 -- # get_header_version patch 00:05:48.901 19:32:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:48.901 19:32:24 version -- app/version.sh@14 -- # cut -f2 00:05:48.901 19:32:24 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.901 19:32:24 version -- app/version.sh@19 -- # patch=0 00:05:48.901 19:32:24 version -- app/version.sh@20 -- # get_header_version suffix 00:05:48.901 19:32:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:48.901 19:32:24 version -- app/version.sh@14 -- # cut -f2 00:05:48.901 19:32:24 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.901 19:32:24 version -- app/version.sh@20 -- # suffix=-pre 00:05:48.901 19:32:24 version -- app/version.sh@22 -- # version=25.1 00:05:48.901 19:32:24 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:48.901 19:32:24 version -- app/version.sh@28 -- # version=25.1rc0 00:05:48.901 19:32:24 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:05:48.901 19:32:24 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:05:49.160 19:32:24 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:49.160 19:32:24 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:49.160 00:05:49.160 real 0m0.251s 00:05:49.160 user 0m0.136s 00:05:49.160 sys 0m0.156s 00:05:49.160 19:32:24 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.160 19:32:24 version -- common/autotest_common.sh@10 -- # set +x 00:05:49.160 ************************************ 00:05:49.160 END TEST version 00:05:49.160 ************************************ 00:05:49.160 19:32:24 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:49.160 19:32:24 -- spdk/autotest.sh@194 -- # uname -s 00:05:49.160 19:32:24 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:49.160 19:32:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:49.160 19:32:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:49.160 19:32:24 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:49.160 19:32:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:49.160 19:32:24 -- common/autotest_common.sh@10 -- # set +x 00:05:49.160 19:32:24 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:05:49.160 19:32:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:05:49.160 19:32:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:05:49.160 19:32:24 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:05:49.160 19:32:24 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:05:49.160 19:32:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.160 19:32:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.160 19:32:24 -- common/autotest_common.sh@10 -- # set +x 00:05:49.160 ************************************ 00:05:49.160 START TEST llvm_fuzz 00:05:49.160 ************************************ 00:05:49.160 19:32:24 llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:05:49.160 * Looking for test storage... 
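Note: the version test that just finished compares two sources of truth: the SPDK_VERSION_* macros scraped out of include/spdk/version.h, and the version reported by the Python bindings. A condensed sketch of that check, with the workspace paths shortened and the -pre to rc0 mapping written to match the 25.1rc0 value seen in the trace, is:

    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ $suffix == -pre ]] && version=${version}rc0     # 25.1 plus -pre becomes 25.1rc0, as logged above
    # assumes PYTHONPATH already points at spdk/python, as it does earlier in this trace
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]] || echo "header and python module disagree on the SPDK version"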
00:05:49.160 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:05:49.160 19:32:24 llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.160 19:32:24 llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.160 19:32:24 llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.420 19:32:24 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.420 --rc genhtml_branch_coverage=1 00:05:49.420 --rc genhtml_function_coverage=1 00:05:49.420 --rc genhtml_legend=1 00:05:49.420 --rc geninfo_all_blocks=1 00:05:49.420 --rc geninfo_unexecuted_blocks=1 00:05:49.420 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.420 ' 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.420 --rc genhtml_branch_coverage=1 00:05:49.420 --rc genhtml_function_coverage=1 00:05:49.420 --rc genhtml_legend=1 00:05:49.420 --rc geninfo_all_blocks=1 00:05:49.420 --rc 
geninfo_unexecuted_blocks=1 00:05:49.420 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.420 ' 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.420 --rc genhtml_branch_coverage=1 00:05:49.420 --rc genhtml_function_coverage=1 00:05:49.420 --rc genhtml_legend=1 00:05:49.420 --rc geninfo_all_blocks=1 00:05:49.420 --rc geninfo_unexecuted_blocks=1 00:05:49.420 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.420 ' 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.420 --rc genhtml_branch_coverage=1 00:05:49.420 --rc genhtml_function_coverage=1 00:05:49.420 --rc genhtml_legend=1 00:05:49.420 --rc geninfo_all_blocks=1 00:05:49.420 --rc geninfo_unexecuted_blocks=1 00:05:49.420 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.420 ' 00:05:49.420 19:32:24 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:05:49.420 19:32:24 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@550 -- # fuzzers=() 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@550 -- # local fuzzers 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@552 -- # [[ -n '' ]] 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@555 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@556 -- # fuzzers=("${fuzzers[@]##*/}") 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@559 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:05:49.420 19:32:24 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:05:49.420 19:32:24 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:05:49.420 19:32:24 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:05:49.420 19:32:24 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:05:49.420 19:32:24 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:05:49.420 19:32:24 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:05:49.420 19:32:24 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:05:49.420 19:32:24 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:05:49.420 19:32:24 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.420 19:32:24 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:05:49.420 ************************************ 00:05:49.420 START TEST nvmf_llvm_fuzz 00:05:49.420 ************************************ 00:05:49.420 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:05:49.420 * Looking for test storage... 
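Note: the llvm_fuzz wrapper above builds its work list by globbing test/fuzz/llvm/, keeping only the basenames (common.sh llvm-gcov.sh nvmf vfio in this run) and dispatching run.sh for the real fuzz targets. Read from the trace, the loop is roughly as follows; the real script may gate each target on additional flags:

    fuzzers=("$rootdir/test/fuzz/llvm/"*)        # one entry per file or directory under test/fuzz/llvm
    fuzzers=("${fuzzers[@]##*/}")                # strip the leading paths, keep only the basenames
    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            nvmf | vfio) run_test "${fuzzer}_llvm_fuzz" "$rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
            *) ;;                                # helpers such as common.sh and llvm-gcov.sh are skipped
        esac
    done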
00:05:49.420 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:49.420 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.420 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.420 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.684 --rc genhtml_branch_coverage=1 00:05:49.684 --rc genhtml_function_coverage=1 00:05:49.684 --rc genhtml_legend=1 00:05:49.684 --rc geninfo_all_blocks=1 00:05:49.684 --rc geninfo_unexecuted_blocks=1 00:05:49.684 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.684 ' 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.684 --rc genhtml_branch_coverage=1 00:05:49.684 --rc genhtml_function_coverage=1 00:05:49.684 --rc genhtml_legend=1 00:05:49.684 --rc geninfo_all_blocks=1 00:05:49.684 --rc geninfo_unexecuted_blocks=1 00:05:49.684 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.684 ' 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.684 --rc genhtml_branch_coverage=1 00:05:49.684 --rc genhtml_function_coverage=1 00:05:49.684 --rc genhtml_legend=1 00:05:49.684 --rc geninfo_all_blocks=1 00:05:49.684 --rc geninfo_unexecuted_blocks=1 00:05:49.684 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.684 ' 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.684 --rc genhtml_branch_coverage=1 00:05:49.684 --rc genhtml_function_coverage=1 00:05:49.684 --rc genhtml_legend=1 00:05:49.684 --rc geninfo_all_blocks=1 00:05:49.684 --rc geninfo_unexecuted_blocks=1 00:05:49.684 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.684 ' 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:05:49.684 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:05:49.685 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:49.685 #define SPDK_CONFIG_H 00:05:49.685 #define SPDK_CONFIG_AIO_FSDEV 1 00:05:49.685 #define SPDK_CONFIG_APPS 1 00:05:49.685 #define SPDK_CONFIG_ARCH native 00:05:49.685 #undef SPDK_CONFIG_ASAN 00:05:49.685 #undef SPDK_CONFIG_AVAHI 00:05:49.685 #undef SPDK_CONFIG_CET 00:05:49.685 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:05:49.685 #define SPDK_CONFIG_COVERAGE 1 00:05:49.685 #define SPDK_CONFIG_CROSS_PREFIX 00:05:49.685 #undef SPDK_CONFIG_CRYPTO 00:05:49.685 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:49.685 #undef SPDK_CONFIG_CUSTOMOCF 00:05:49.685 #undef SPDK_CONFIG_DAOS 00:05:49.685 #define SPDK_CONFIG_DAOS_DIR 00:05:49.685 #define SPDK_CONFIG_DEBUG 1 00:05:49.685 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:49.685 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:05:49.685 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:49.685 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:49.685 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:49.685 #undef SPDK_CONFIG_DPDK_UADK 00:05:49.685 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:05:49.685 #define SPDK_CONFIG_EXAMPLES 1 00:05:49.685 #undef SPDK_CONFIG_FC 00:05:49.685 #define SPDK_CONFIG_FC_PATH 00:05:49.685 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:49.685 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:49.685 #define SPDK_CONFIG_FSDEV 1 00:05:49.685 #undef SPDK_CONFIG_FUSE 00:05:49.685 #define SPDK_CONFIG_FUZZER 1 00:05:49.685 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:05:49.685 #undef 
SPDK_CONFIG_GOLANG 00:05:49.685 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:49.685 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:49.685 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:49.685 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:49.685 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:49.685 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:49.685 #undef SPDK_CONFIG_HAVE_LZ4 00:05:49.685 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:05:49.685 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:05:49.685 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:49.685 #define SPDK_CONFIG_IDXD 1 00:05:49.685 #define SPDK_CONFIG_IDXD_KERNEL 1 00:05:49.685 #undef SPDK_CONFIG_IPSEC_MB 00:05:49.685 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:49.685 #define SPDK_CONFIG_ISAL 1 00:05:49.685 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:49.685 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:49.685 #define SPDK_CONFIG_LIBDIR 00:05:49.686 #undef SPDK_CONFIG_LTO 00:05:49.686 #define SPDK_CONFIG_MAX_LCORES 128 00:05:49.686 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:05:49.686 #define SPDK_CONFIG_NVME_CUSE 1 00:05:49.686 #undef SPDK_CONFIG_OCF 00:05:49.686 #define SPDK_CONFIG_OCF_PATH 00:05:49.686 #define SPDK_CONFIG_OPENSSL_PATH 00:05:49.686 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:49.686 #define SPDK_CONFIG_PGO_DIR 00:05:49.686 #undef SPDK_CONFIG_PGO_USE 00:05:49.686 #define SPDK_CONFIG_PREFIX /usr/local 00:05:49.686 #undef SPDK_CONFIG_RAID5F 00:05:49.686 #undef SPDK_CONFIG_RBD 00:05:49.686 #define SPDK_CONFIG_RDMA 1 00:05:49.686 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:49.686 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:49.686 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:49.686 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:49.686 #undef SPDK_CONFIG_SHARED 00:05:49.686 #undef SPDK_CONFIG_SMA 00:05:49.686 #define SPDK_CONFIG_TESTS 1 00:05:49.686 #undef SPDK_CONFIG_TSAN 00:05:49.686 #define SPDK_CONFIG_UBLK 1 00:05:49.686 #define SPDK_CONFIG_UBSAN 1 00:05:49.686 #undef SPDK_CONFIG_UNIT_TESTS 00:05:49.686 #undef SPDK_CONFIG_URING 00:05:49.686 #define SPDK_CONFIG_URING_PATH 00:05:49.686 #undef SPDK_CONFIG_URING_ZNS 00:05:49.686 #undef SPDK_CONFIG_USDT 00:05:49.686 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:49.686 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:49.686 #define SPDK_CONFIG_VFIO_USER 1 00:05:49.686 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:49.686 #define SPDK_CONFIG_VHOST 1 00:05:49.686 #define SPDK_CONFIG_VIRTIO 1 00:05:49.686 #undef SPDK_CONFIG_VTUNE 00:05:49.686 #define SPDK_CONFIG_VTUNE_DIR 00:05:49.686 #define SPDK_CONFIG_WERROR 1 00:05:49.686 #define SPDK_CONFIG_WPDK_DIR 00:05:49.686 #undef SPDK_CONFIG_XNVME 00:05:49.686 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:05:49.686 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:05:49.687 19:32:24 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
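Note: the long run of ': 0' / ': 1' lines followed by export statements is autotest_common.sh giving every SPDK_TEST_* / SPDK_RUN_* flag a default and publishing it to child scripts; flags such as SPDK_RUN_FUNCTIONAL_TEST, SPDK_TEST_FUZZER, SPDK_TEST_FUZZER_SHORT and SPDK_RUN_UBSAN are exported with value 1 in this run, while most of the rest default to 0 or stay empty. The idiom behind each pair is, in sketch form:

    : "${SPDK_TEST_FUZZER:=0}"      # ':' is a no-op; the expansion assigns the default only if the flag is unset
    export SPDK_TEST_FUZZER         # make the (possibly defaulted) flag visible to every child process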
00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:05:49.687 19:32:24 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:05:49.687 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 1118288 ]] 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 1118288 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.a6b9RL 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.a6b9RL/tests/nvmf /tmp/spdk.a6b9RL 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=52913782784 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730607104 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=8816824320 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30860537856 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865301504 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=12340129792 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346122240 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5992448 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:05:49.688 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30863781888 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865305600 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=1523712 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:05:49.689 * Looking for test storage... 
00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=52913782784 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=11031416832 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:49.689 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set -o errtrace 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1685 -- # true 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # xtrace_fd 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.689 19:32:24 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.949 --rc genhtml_branch_coverage=1 00:05:49.949 --rc genhtml_function_coverage=1 00:05:49.949 --rc genhtml_legend=1 00:05:49.949 --rc geninfo_all_blocks=1 00:05:49.949 --rc geninfo_unexecuted_blocks=1 00:05:49.949 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.949 ' 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.949 --rc genhtml_branch_coverage=1 00:05:49.949 --rc genhtml_function_coverage=1 00:05:49.949 --rc genhtml_legend=1 00:05:49.949 --rc geninfo_all_blocks=1 00:05:49.949 --rc geninfo_unexecuted_blocks=1 00:05:49.949 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.949 ' 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.949 --rc genhtml_branch_coverage=1 00:05:49.949 --rc genhtml_function_coverage=1 00:05:49.949 --rc genhtml_legend=1 00:05:49.949 --rc geninfo_all_blocks=1 00:05:49.949 --rc geninfo_unexecuted_blocks=1 00:05:49.949 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.949 ' 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.949 --rc genhtml_branch_coverage=1 00:05:49.949 --rc genhtml_function_coverage=1 00:05:49.949 --rc genhtml_legend=1 00:05:49.949 --rc geninfo_all_blocks=1 00:05:49.949 --rc geninfo_unexecuted_blocks=1 00:05:49.949 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.949 ' 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:05:49.949 19:32:25 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:49.949 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:05:49.950 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:05:49.950 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:49.950 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:49.950 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:05:49.950 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:05:49.950 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:05:49.950 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:05:49.950 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:49.950 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:49.950 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:49.950 19:32:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:05:49.950 [2024-11-26 19:32:25.104137] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:49.950 [2024-11-26 19:32:25.104218] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1118370 ] 00:05:50.209 [2024-11-26 19:32:25.375492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.209 [2024-11-26 19:32:25.432390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.209 [2024-11-26 19:32:25.491428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.209 [2024-11-26 19:32:25.507809] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:05:50.469 INFO: Running with entropic power schedule (0xFF, 100). 00:05:50.469 INFO: Seed: 3093996927 00:05:50.469 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:05:50.469 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:05:50.469 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:05:50.469 INFO: A corpus is not provided, starting from an empty corpus 00:05:50.469 #2 INITED exec/s: 0 rss: 65Mb 00:05:50.469 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:05:50.469 This may also happen if the target rejected all inputs we tried so far 00:05:50.469 [2024-11-26 19:32:25.553270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.469 [2024-11-26 19:32:25.553299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:50.469 [2024-11-26 19:32:25.553355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.469 [2024-11-26 19:32:25.553369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:50.728 NEW_FUNC[1/715]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:05:50.728 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:50.728 #16 NEW cov: 12188 ft: 12204 corp: 2/149b lim: 320 exec/s: 0 rss: 73Mb L: 148/148 MS: 4 ShuffleBytes-CMP-InsertRepeatedBytes-InsertRepeatedBytes- DE: "\000\000\000\000\000\000\000p"- 00:05:50.728 [2024-11-26 19:32:25.874130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.728 [2024-11-26 19:32:25.874164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:50.728 [2024-11-26 19:32:25.874220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.728 [2024-11-26 19:32:25.874235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:50.728 #19 NEW cov: 12318 ft: 12806 corp: 
3/298b lim: 320 exec/s: 0 rss: 73Mb L: 149/149 MS: 3 CrossOver-ShuffleBytes-CrossOver- 00:05:50.729 [2024-11-26 19:32:25.914220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.729 [2024-11-26 19:32:25.914251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:50.729 [2024-11-26 19:32:25.914308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.729 [2024-11-26 19:32:25.914322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:50.729 [2024-11-26 19:32:25.914376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.729 [2024-11-26 19:32:25.914390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:50.729 #20 NEW cov: 12324 ft: 13293 corp: 4/517b lim: 320 exec/s: 0 rss: 73Mb L: 219/219 MS: 1 InsertRepeatedBytes- 00:05:50.729 [2024-11-26 19:32:25.974277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.729 [2024-11-26 19:32:25.974305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:50.729 [2024-11-26 19:32:25.974362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.729 [2024-11-26 19:32:25.974376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:50.729 #27 NEW cov: 12412 ft: 13518 corp: 5/667b lim: 320 exec/s: 0 rss: 73Mb L: 150/219 MS: 2 InsertByte-CrossOver- 00:05:50.729 [2024-11-26 19:32:26.014410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.729 [2024-11-26 19:32:26.014437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:50.729 [2024-11-26 19:32:26.014496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.729 [2024-11-26 19:32:26.014511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:50.729 #28 NEW cov: 12412 ft: 13631 corp: 6/817b lim: 320 exec/s: 0 rss: 73Mb L: 150/219 MS: 1 InsertByte- 00:05:50.988 [2024-11-26 19:32:26.054510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.988 [2024-11-26 19:32:26.054538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:50.988 [2024-11-26 19:32:26.054592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.988 [2024-11-26 19:32:26.054611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:50.988 #29 NEW cov: 12412 ft: 13757 corp: 7/967b lim: 320 exec/s: 0 rss: 73Mb L: 150/219 MS: 1 
ChangeBit- 00:05:50.988 [2024-11-26 19:32:26.114705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.988 [2024-11-26 19:32:26.114732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:50.988 [2024-11-26 19:32:26.114787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.988 [2024-11-26 19:32:26.114801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:50.988 #30 NEW cov: 12412 ft: 13799 corp: 8/1117b lim: 320 exec/s: 0 rss: 73Mb L: 150/219 MS: 1 ChangeBit- 00:05:50.988 [2024-11-26 19:32:26.154817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.988 [2024-11-26 19:32:26.154843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:50.988 [2024-11-26 19:32:26.154907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.988 [2024-11-26 19:32:26.154921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:50.988 #31 NEW cov: 12412 ft: 13877 corp: 9/1268b lim: 320 exec/s: 0 rss: 73Mb L: 151/219 MS: 1 InsertByte- 00:05:50.988 [2024-11-26 19:32:26.215106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.988 [2024-11-26 19:32:26.215132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:50.988 [2024-11-26 19:32:26.215190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.988 [2024-11-26 19:32:26.215203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:50.988 [2024-11-26 19:32:26.215261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:3c3c3c3c cdw11:3c3c3c3c 00:05:50.988 [2024-11-26 19:32:26.215275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:50.988 #32 NEW cov: 12412 ft: 13899 corp: 10/1478b lim: 320 exec/s: 0 rss: 73Mb L: 210/219 MS: 1 InsertRepeatedBytes- 00:05:50.988 [2024-11-26 19:32:26.254955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.988 [2024-11-26 19:32:26.254981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:50.988 #33 NEW cov: 12412 ft: 14160 corp: 11/1593b lim: 320 exec/s: 0 rss: 73Mb L: 115/219 MS: 1 EraseBytes- 00:05:51.247 [2024-11-26 19:32:26.315239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.247 [2024-11-26 19:32:26.315266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.247 [2024-11-26 19:32:26.315323] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.247 [2024-11-26 19:32:26.315337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:51.247 #34 NEW cov: 12412 ft: 14171 corp: 12/1751b lim: 320 exec/s: 0 rss: 73Mb L: 158/219 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000p"- 00:05:51.247 [2024-11-26 19:32:26.355486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.247 [2024-11-26 19:32:26.355512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.247 [2024-11-26 19:32:26.355570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.247 [2024-11-26 19:32:26.355584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:51.247 [2024-11-26 19:32:26.355640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:3c3c3c3c cdw11:3c3c3c3c 00:05:51.247 [2024-11-26 19:32:26.355654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:51.247 #35 NEW cov: 12412 ft: 14292 corp: 13/1962b lim: 320 exec/s: 0 rss: 73Mb L: 211/219 MS: 1 InsertByte- 00:05:51.247 [2024-11-26 19:32:26.415509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.247 [2024-11-26 19:32:26.415535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.247 [2024-11-26 19:32:26.415592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.247 [2024-11-26 19:32:26.415615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:51.247 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:05:51.247 #36 NEW cov: 12435 ft: 14305 corp: 14/2096b lim: 320 exec/s: 0 rss: 74Mb L: 134/219 MS: 1 EraseBytes- 00:05:51.247 [2024-11-26 19:32:26.455509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.247 [2024-11-26 19:32:26.455536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.247 #37 NEW cov: 12435 ft: 14368 corp: 15/2217b lim: 320 exec/s: 0 rss: 74Mb L: 121/219 MS: 1 EraseBytes- 00:05:51.247 [2024-11-26 19:32:26.495748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.247 [2024-11-26 19:32:26.495774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.247 [2024-11-26 19:32:26.495830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.248 [2024-11-26 19:32:26.495844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:05:51.248 #38 NEW cov: 12435 ft: 14369 corp: 16/2366b lim: 320 exec/s: 0 rss: 74Mb L: 149/219 MS: 1 ChangeByte- 00:05:51.248 [2024-11-26 19:32:26.535863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.248 [2024-11-26 19:32:26.535888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.248 [2024-11-26 19:32:26.535947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.248 [2024-11-26 19:32:26.535961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:51.507 #39 NEW cov: 12435 ft: 14417 corp: 17/2517b lim: 320 exec/s: 39 rss: 74Mb L: 151/219 MS: 1 InsertByte- 00:05:51.507 [2024-11-26 19:32:26.576103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.507 [2024-11-26 19:32:26.576129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.507 [2024-11-26 19:32:26.576185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.507 [2024-11-26 19:32:26.576199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:51.507 [2024-11-26 19:32:26.576256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.507 [2024-11-26 19:32:26.576270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:51.507 #40 NEW cov: 12435 ft: 14476 corp: 18/2736b lim: 320 exec/s: 40 rss: 74Mb L: 219/219 MS: 1 ChangeBit- 00:05:51.507 [2024-11-26 19:32:26.615943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.507 [2024-11-26 19:32:26.615968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.507 #41 NEW cov: 12435 ft: 14508 corp: 19/2857b lim: 320 exec/s: 41 rss: 74Mb L: 121/219 MS: 1 CMP- DE: "\352\032X\"\000\000\000\000"- 00:05:51.507 [2024-11-26 19:32:26.676352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.507 [2024-11-26 19:32:26.676379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.507 [2024-11-26 19:32:26.676435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.507 [2024-11-26 19:32:26.676453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:51.507 [2024-11-26 19:32:26.676511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:3c3c3c3c cdw11:3c3c3c3c 00:05:51.507 [2024-11-26 19:32:26.676525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:51.507 #42 NEW cov: 12435 ft: 14522 
corp: 20/3067b lim: 320 exec/s: 42 rss: 74Mb L: 210/219 MS: 1 CopyPart- 00:05:51.507 [2024-11-26 19:32:26.716375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.507 [2024-11-26 19:32:26.716401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.507 [2024-11-26 19:32:26.716460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000098 00:05:51.507 [2024-11-26 19:32:26.716474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:51.507 #43 NEW cov: 12435 ft: 14571 corp: 21/3202b lim: 320 exec/s: 43 rss: 74Mb L: 135/219 MS: 1 InsertByte- 00:05:51.507 [2024-11-26 19:32:26.776448] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:05:51.507 [2024-11-26 19:32:26.776475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.507 NEW_FUNC[1/1]: 0x1974c68 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:05:51.507 #45 NEW cov: 12473 ft: 14664 corp: 22/3278b lim: 320 exec/s: 45 rss: 74Mb L: 76/219 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:05:51.767 [2024-11-26 19:32:26.816668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.767 [2024-11-26 19:32:26.816694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.767 [2024-11-26 19:32:26.816751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.767 [2024-11-26 19:32:26.816765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:51.767 #46 NEW cov: 12473 ft: 14725 corp: 23/3426b lim: 320 exec/s: 46 rss: 74Mb L: 148/219 MS: 1 CopyPart- 00:05:51.767 [2024-11-26 19:32:26.876719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.767 [2024-11-26 19:32:26.876744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.767 #47 NEW cov: 12473 ft: 14737 corp: 24/3506b lim: 320 exec/s: 47 rss: 74Mb L: 80/219 MS: 1 CrossOver- 00:05:51.767 [2024-11-26 19:32:26.936941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.767 [2024-11-26 19:32:26.936967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.767 [2024-11-26 19:32:26.937023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.767 [2024-11-26 19:32:26.937037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:51.767 #48 NEW cov: 12473 ft: 14757 corp: 25/3657b lim: 320 exec/s: 48 rss: 74Mb L: 151/219 MS: 1 ChangeBinInt- 00:05:51.767 [2024-11-26 19:32:26.997231] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.767 [2024-11-26 19:32:26.997256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.767 [2024-11-26 19:32:26.997314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.767 [2024-11-26 19:32:26.997328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:51.767 #49 NEW cov: 12473 ft: 14763 corp: 26/3806b lim: 320 exec/s: 49 rss: 74Mb L: 149/219 MS: 1 ShuffleBytes- 00:05:51.767 [2024-11-26 19:32:27.037315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.767 [2024-11-26 19:32:27.037340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.767 [2024-11-26 19:32:27.037396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:51.767 [2024-11-26 19:32:27.037410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.027 #50 NEW cov: 12473 ft: 14780 corp: 27/3992b lim: 320 exec/s: 50 rss: 74Mb L: 186/219 MS: 1 CopyPart- 00:05:52.027 [2024-11-26 19:32:27.097625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.027 [2024-11-26 19:32:27.097651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.027 [2024-11-26 19:32:27.097709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.027 [2024-11-26 19:32:27.097723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.027 [2024-11-26 19:32:27.097778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:00ffffff 00:05:52.027 [2024-11-26 19:32:27.097793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.027 #51 NEW cov: 12480 ft: 14798 corp: 28/4186b lim: 320 exec/s: 51 rss: 74Mb L: 194/219 MS: 1 InsertRepeatedBytes- 00:05:52.027 [2024-11-26 19:32:27.137433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.027 [2024-11-26 19:32:27.137459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.027 #52 NEW cov: 12480 ft: 14828 corp: 29/4308b lim: 320 exec/s: 52 rss: 74Mb L: 122/219 MS: 1 InsertByte- 00:05:52.027 [2024-11-26 19:32:27.177876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.027 [2024-11-26 19:32:27.177902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.027 [2024-11-26 19:32:27.177959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 
nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.027 [2024-11-26 19:32:27.177973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.027 [2024-11-26 19:32:27.178030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:3c3c3c3c cdw11:3c3c3c3c 00:05:52.027 [2024-11-26 19:32:27.178045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.027 #53 NEW cov: 12480 ft: 14845 corp: 30/4518b lim: 320 exec/s: 53 rss: 74Mb L: 210/219 MS: 1 ChangeByte- 00:05:52.027 [2024-11-26 19:32:27.237881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.027 [2024-11-26 19:32:27.237907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.027 [2024-11-26 19:32:27.237963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.027 [2024-11-26 19:32:27.237979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.027 #54 NEW cov: 12480 ft: 14850 corp: 31/4666b lim: 320 exec/s: 54 rss: 74Mb L: 148/219 MS: 1 ChangeBinInt- 00:05:52.027 [2024-11-26 19:32:27.278104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.027 [2024-11-26 19:32:27.278130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.027 [2024-11-26 19:32:27.278187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.027 [2024-11-26 19:32:27.278200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.027 [2024-11-26 19:32:27.278257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:3c3c3c3c cdw11:3c3c3c3c 00:05:52.027 [2024-11-26 19:32:27.278271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.027 #55 NEW cov: 12480 ft: 14854 corp: 32/4877b lim: 320 exec/s: 55 rss: 74Mb L: 211/219 MS: 1 CMP- DE: "?\001\000\000"- 00:05:52.287 [2024-11-26 19:32:27.338233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.287 [2024-11-26 19:32:27.338261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.287 [2024-11-26 19:32:27.338321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.287 [2024-11-26 19:32:27.338335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.287 #56 NEW cov: 12480 ft: 14887 corp: 33/5026b lim: 320 exec/s: 56 rss: 75Mb L: 149/219 MS: 1 CopyPart- 00:05:52.287 [2024-11-26 19:32:27.378309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.287 [2024-11-26 19:32:27.378335] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.287 [2024-11-26 19:32:27.378393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.287 [2024-11-26 19:32:27.378407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.287 #57 NEW cov: 12480 ft: 14888 corp: 34/5182b lim: 320 exec/s: 57 rss: 75Mb L: 156/219 MS: 1 InsertRepeatedBytes- 00:05:52.287 [2024-11-26 19:32:27.438447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.287 [2024-11-26 19:32:27.438473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.287 [2024-11-26 19:32:27.438534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:98000000 cdw11:00000000 00:05:52.287 [2024-11-26 19:32:27.438549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.287 #58 NEW cov: 12480 ft: 14928 corp: 35/5316b lim: 320 exec/s: 58 rss: 75Mb L: 134/219 MS: 1 EraseBytes- 00:05:52.287 [2024-11-26 19:32:27.478541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.287 [2024-11-26 19:32:27.478568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.287 [2024-11-26 19:32:27.478626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.287 [2024-11-26 19:32:27.478640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.287 #59 NEW cov: 12480 ft: 14947 corp: 36/5450b lim: 320 exec/s: 59 rss: 75Mb L: 134/219 MS: 1 CopyPart- 00:05:52.287 [2024-11-26 19:32:27.518915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.288 [2024-11-26 19:32:27.518943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.288 [2024-11-26 19:32:27.518999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:52.288 [2024-11-26 19:32:27.519013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.288 [2024-11-26 19:32:27.519068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:3c3c3c3c cdw11:3c3c3c3c 00:05:52.288 [2024-11-26 19:32:27.519083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.288 [2024-11-26 19:32:27.519139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3c) qid:0 cid:7 nsid:3c3c3c3c cdw10:00000000 cdw11:00000000 00:05:52.288 [2024-11-26 19:32:27.519153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:52.288 #60 NEW cov: 12480 ft: 15160 corp: 37/5749b lim: 320 
exec/s: 30 rss: 75Mb L: 299/299 MS: 1 InsertRepeatedBytes- 00:05:52.288 #60 DONE cov: 12480 ft: 15160 corp: 37/5749b lim: 320 exec/s: 30 rss: 75Mb 00:05:52.288 ###### Recommended dictionary. ###### 00:05:52.288 "\000\000\000\000\000\000\000p" # Uses: 1 00:05:52.288 "\352\032X\"\000\000\000\000" # Uses: 0 00:05:52.288 "?\001\000\000" # Uses: 0 00:05:52.288 ###### End of recommended dictionary. ###### 00:05:52.288 Done 60 runs in 2 second(s) 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:52.547 19:32:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:05:52.547 [2024-11-26 19:32:27.715408] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:05:52.548 [2024-11-26 19:32:27.715477] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1118840 ] 00:05:52.807 [2024-11-26 19:32:27.982268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.807 [2024-11-26 19:32:28.031933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.807 [2024-11-26 19:32:28.090884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.807 [2024-11-26 19:32:28.107194] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:05:53.066 INFO: Running with entropic power schedule (0xFF, 100). 00:05:53.066 INFO: Seed: 1400017815 00:05:53.066 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:05:53.066 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:05:53.066 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:05:53.066 INFO: A corpus is not provided, starting from an empty corpus 00:05:53.066 #2 INITED exec/s: 0 rss: 65Mb 00:05:53.066 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:05:53.066 This may also happen if the target rejected all inputs we tried so far 00:05:53.066 [2024-11-26 19:32:28.162391] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d4a2 00:05:53.066 [2024-11-26 19:32:28.162609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.066 [2024-11-26 19:32:28.162637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.326 NEW_FUNC[1/717]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:05:53.326 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:53.326 #3 NEW cov: 12304 ft: 12291 corp: 2/10b lim: 30 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CMP- DE: "\000\221\325\324\242j\006\350"- 00:05:53.326 [2024-11-26 19:32:28.483156] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x91d5 00:05:53.326 [2024-11-26 19:32:28.483376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a00ef cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.326 [2024-11-26 19:32:28.483406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.326 #7 NEW cov: 12417 ft: 12948 corp: 3/21b lim: 30 exec/s: 0 rss: 73Mb L: 11/11 MS: 4 CopyPart-ShuffleBytes-InsertByte-PersAutoDict- DE: "\000\221\325\324\242j\006\350"- 00:05:53.326 [2024-11-26 19:32:28.523310] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:05:53.326 [2024-11-26 19:32:28.523432] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:05:53.326 [2024-11-26 19:32:28.523540] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:05:53.326 [2024-11-26 19:32:28.523653] ctrlr.c:2657:nvmf_ctrlr_get_log_page: 
*ERROR*: Invalid log page offset 0x10000d1d1 00:05:53.326 [2024-11-26 19:32:28.523858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:64ff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.326 [2024-11-26 19:32:28.523885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.326 [2024-11-26 19:32:28.523940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.326 [2024-11-26 19:32:28.523955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.326 [2024-11-26 19:32:28.524011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.326 [2024-11-26 19:32:28.524028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:53.326 [2024-11-26 19:32:28.524080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.326 [2024-11-26 19:32:28.524094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:53.326 #12 NEW cov: 12423 ft: 13772 corp: 4/47b lim: 30 exec/s: 0 rss: 73Mb L: 26/26 MS: 5 ShuffleBytes-CMP-CMP-ChangeByte-InsertRepeatedBytes- DE: "\377\377\377\377"-"\377\366"- 00:05:53.326 [2024-11-26 19:32:28.563381] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:53.326 [2024-11-26 19:32:28.563495] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:53.326 [2024-11-26 19:32:28.563608] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:53.326 [2024-11-26 19:32:28.563819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4040835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.326 [2024-11-26 19:32:28.563846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.326 [2024-11-26 19:32:28.563903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.326 [2024-11-26 19:32:28.563918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.326 [2024-11-26 19:32:28.563972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.326 [2024-11-26 19:32:28.563987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:53.326 #16 NEW cov: 12508 ft: 14235 corp: 5/65b lim: 30 exec/s: 0 rss: 73Mb L: 18/26 MS: 4 InsertByte-ChangeByte-CopyPart-InsertRepeatedBytes- 00:05:53.326 [2024-11-26 19:32:28.603404] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d4a2 00:05:53.326 [2024-11-26 19:32:28.603615] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a008191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.326 [2024-11-26 19:32:28.603641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.586 #17 NEW cov: 12508 ft: 14379 corp: 6/74b lim: 30 exec/s: 0 rss: 73Mb L: 9/26 MS: 1 ChangeByte- 00:05:53.586 [2024-11-26 19:32:28.663631] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:53.586 [2024-11-26 19:32:28.663746] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:53.586 [2024-11-26 19:32:28.663949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:405b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.586 [2024-11-26 19:32:28.663976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.586 [2024-11-26 19:32:28.664031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.586 [2024-11-26 19:32:28.664047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.586 #18 NEW cov: 12508 ft: 14677 corp: 7/87b lim: 30 exec/s: 0 rss: 73Mb L: 13/26 MS: 1 EraseBytes- 00:05:53.586 [2024-11-26 19:32:28.723751] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:53.586 [2024-11-26 19:32:28.723866] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:53.586 [2024-11-26 19:32:28.724077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:405b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.586 [2024-11-26 19:32:28.724102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.586 [2024-11-26 19:32:28.724157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.586 [2024-11-26 19:32:28.724171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.586 #19 NEW cov: 12508 ft: 14722 corp: 8/100b lim: 30 exec/s: 0 rss: 73Mb L: 13/26 MS: 1 ShuffleBytes- 00:05:53.586 [2024-11-26 19:32:28.783926] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:53.586 [2024-11-26 19:32:28.784041] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:53.586 [2024-11-26 19:32:28.784239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:405b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.586 [2024-11-26 19:32:28.784279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.586 [2024-11-26 19:32:28.784336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b5b83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.586 [2024-11-26 19:32:28.784351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.586 #20 NEW cov: 12508 ft: 14733 corp: 9/113b lim: 30 exec/s: 0 rss: 73Mb L: 13/26 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:05:53.586 [2024-11-26 19:32:28.844068] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534532) > buf size (4096) 00:05:53.586 [2024-11-26 19:32:28.844277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000291 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.586 [2024-11-26 19:32:28.844301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.586 #21 NEW cov: 12531 ft: 14821 corp: 10/123b lim: 30 exec/s: 0 rss: 73Mb L: 10/26 MS: 1 InsertByte- 00:05:53.586 [2024-11-26 19:32:28.884171] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d4a2 00:05:53.586 [2024-11-26 19:32:28.884376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.586 [2024-11-26 19:32:28.884402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.846 #22 NEW cov: 12531 ft: 14921 corp: 11/133b lim: 30 exec/s: 0 rss: 73Mb L: 10/26 MS: 1 InsertByte- 00:05:53.846 [2024-11-26 19:32:28.924312] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:53.846 [2024-11-26 19:32:28.924430] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:53.846 [2024-11-26 19:32:28.924636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:404b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.846 [2024-11-26 19:32:28.924662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.846 [2024-11-26 19:32:28.924717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.846 [2024-11-26 19:32:28.924732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.846 #23 NEW cov: 12531 ft: 14996 corp: 12/146b lim: 30 exec/s: 0 rss: 73Mb L: 13/26 MS: 1 ChangeBit- 00:05:53.846 [2024-11-26 19:32:28.964393] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200002b5d 00:05:53.846 [2024-11-26 19:32:28.964608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a09026f cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.846 [2024-11-26 19:32:28.964633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.846 #24 NEW cov: 12531 ft: 15021 corp: 13/155b lim: 30 exec/s: 0 rss: 73Mb L: 9/26 MS: 1 ChangeBinInt- 00:05:53.846 [2024-11-26 19:32:29.004561] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:53.847 [2024-11-26 19:32:29.004683] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:53.847 [2024-11-26 19:32:29.004888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:405b835b cdw11:00000003 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.847 [2024-11-26 19:32:29.004914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.847 [2024-11-26 19:32:29.004969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:075b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.847 [2024-11-26 19:32:29.004984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.847 #25 NEW cov: 12531 ft: 15055 corp: 14/168b lim: 30 exec/s: 0 rss: 73Mb L: 13/26 MS: 1 ChangeByte- 00:05:53.847 [2024-11-26 19:32:29.044698] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:53.847 [2024-11-26 19:32:29.044810] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (879984) > buf size (4096) 00:05:53.847 [2024-11-26 19:32:29.045016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:405b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.847 [2024-11-26 19:32:29.045042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.847 [2024-11-26 19:32:29.045114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.847 [2024-11-26 19:32:29.045129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.847 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:05:53.847 #26 NEW cov: 12554 ft: 15089 corp: 15/181b lim: 30 exec/s: 0 rss: 73Mb L: 13/26 MS: 1 ChangeByte- 00:05:53.847 [2024-11-26 19:32:29.084756] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d4a2 00:05:53.847 [2024-11-26 19:32:29.084966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.847 [2024-11-26 19:32:29.084990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.847 #27 NEW cov: 12554 ft: 15102 corp: 16/191b lim: 30 exec/s: 0 rss: 73Mb L: 10/26 MS: 1 PersAutoDict- DE: "\000\221\325\324\242j\006\350"- 00:05:53.847 [2024-11-26 19:32:29.144988] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (16388) > buf size (4096) 00:05:53.847 [2024-11-26 19:32:29.145106] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x91d5 00:05:53.847 [2024-11-26 19:32:29.145210] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (742028) > buf size (4096) 00:05:53.847 [2024-11-26 19:32:29.145410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:10000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.847 [2024-11-26 19:32:29.145435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.847 [2024-11-26 19:32:29.145488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.847 
[2024-11-26 19:32:29.145506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.847 [2024-11-26 19:32:29.145559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d4a2026a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:53.847 [2024-11-26 19:32:29.145573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.105 #28 NEW cov: 12554 ft: 15140 corp: 17/209b lim: 30 exec/s: 28 rss: 74Mb L: 18/26 MS: 1 CMP- DE: "\020\000\000\000\000\000\000\000"- 00:05:54.105 [2024-11-26 19:32:29.205112] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100000a09 00:05:54.105 [2024-11-26 19:32:29.205227] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000095f9 00:05:54.105 [2024-11-26 19:32:29.205420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a2b815d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.105 [2024-11-26 19:32:29.205445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.105 [2024-11-26 19:32:29.205503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f2a812b cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.105 [2024-11-26 19:32:29.205517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.105 #29 NEW cov: 12554 ft: 15177 corp: 18/222b lim: 30 exec/s: 29 rss: 74Mb L: 13/26 MS: 1 CopyPart- 00:05:54.105 [2024-11-26 19:32:29.265272] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100000a09 00:05:54.105 [2024-11-26 19:32:29.265388] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000955d 00:05:54.105 [2024-11-26 19:32:29.265589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a2b815d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.105 [2024-11-26 19:32:29.265619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.105 [2024-11-26 19:32:29.265675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f2a812b cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.105 [2024-11-26 19:32:29.265690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.105 #30 NEW cov: 12554 ft: 15227 corp: 19/235b lim: 30 exec/s: 30 rss: 74Mb L: 13/26 MS: 1 CopyPart- 00:05:54.105 [2024-11-26 19:32:29.325417] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100000a09 00:05:54.105 [2024-11-26 19:32:29.325533] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:05:54.105 [2024-11-26 19:32:29.325768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a2b815d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.105 [2024-11-26 19:32:29.325793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.105 [2024-11-26 19:32:29.325850] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f2a812b cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.105 [2024-11-26 19:32:29.325865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.105 #31 NEW cov: 12554 ft: 15247 corp: 20/251b lim: 30 exec/s: 31 rss: 74Mb L: 16/26 MS: 1 CopyPart- 00:05:54.105 [2024-11-26 19:32:29.385586] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.105 [2024-11-26 19:32:29.385709] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:54.105 [2024-11-26 19:32:29.385907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:405b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.105 [2024-11-26 19:32:29.385936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.105 [2024-11-26 19:32:29.385993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b5b83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.105 [2024-11-26 19:32:29.386008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.364 #32 NEW cov: 12554 ft: 15270 corp: 21/264b lim: 30 exec/s: 32 rss: 74Mb L: 13/26 MS: 1 CopyPart- 00:05:54.364 [2024-11-26 19:32:29.445739] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.364 [2024-11-26 19:32:29.445855] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.364 [2024-11-26 19:32:29.446063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:405b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.364 [2024-11-26 19:32:29.446090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.364 [2024-11-26 19:32:29.446147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.364 [2024-11-26 19:32:29.446162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.364 #33 NEW cov: 12554 ft: 15298 corp: 22/277b lim: 30 exec/s: 33 rss: 74Mb L: 13/26 MS: 1 ShuffleBytes- 00:05:54.364 [2024-11-26 19:32:29.485913] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.364 [2024-11-26 19:32:29.486023] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200002b5d 00:05:54.364 [2024-11-26 19:32:29.486130] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:54.364 [2024-11-26 19:32:29.486334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:405b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.364 [2024-11-26 19:32:29.486361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.364 [2024-11-26 19:32:29.486417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b09026f cdw11:00000002 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.365 [2024-11-26 19:32:29.486432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.365 [2024-11-26 19:32:29.486489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95f9835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.365 [2024-11-26 19:32:29.486503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.365 #34 NEW cov: 12554 ft: 15322 corp: 23/297b lim: 30 exec/s: 34 rss: 74Mb L: 20/26 MS: 1 CrossOver- 00:05:54.365 [2024-11-26 19:32:29.526062] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:05:54.365 [2024-11-26 19:32:29.526176] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:05:54.365 [2024-11-26 19:32:29.526280] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:05:54.365 [2024-11-26 19:32:29.526385] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:05:54.365 [2024-11-26 19:32:29.526590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:64ff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.365 [2024-11-26 19:32:29.526621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.365 [2024-11-26 19:32:29.526679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.365 [2024-11-26 19:32:29.526697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.365 [2024-11-26 19:32:29.526753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d1d981d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.365 [2024-11-26 19:32:29.526767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.365 [2024-11-26 19:32:29.526822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.365 [2024-11-26 19:32:29.526837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:54.365 #35 NEW cov: 12554 ft: 15329 corp: 24/323b lim: 30 exec/s: 35 rss: 74Mb L: 26/26 MS: 1 ChangeBit- 00:05:54.365 [2024-11-26 19:32:29.586154] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d4a2 00:05:54.365 [2024-11-26 19:32:29.586268] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x91d5 00:05:54.365 [2024-11-26 19:32:29.586468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a008191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.365 [2024-11-26 19:32:29.586509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.365 [2024-11-26 19:32:29.586565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6a0600e8 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.365 [2024-11-26 19:32:29.586581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.365 #36 NEW cov: 12554 ft: 15354 corp: 25/340b lim: 30 exec/s: 36 rss: 74Mb L: 17/26 MS: 1 PersAutoDict- DE: "\000\221\325\324\242j\006\350"- 00:05:54.365 [2024-11-26 19:32:29.646348] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:05:54.365 [2024-11-26 19:32:29.646462] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:05:54.365 [2024-11-26 19:32:29.646563] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:05:54.365 [2024-11-26 19:32:29.646673] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d1d1 00:05:54.365 [2024-11-26 19:32:29.646873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:64ff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.365 [2024-11-26 19:32:29.646900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.365 [2024-11-26 19:32:29.646956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.365 [2024-11-26 19:32:29.646972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.365 [2024-11-26 19:32:29.647023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d1d981d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.365 [2024-11-26 19:32:29.647038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.365 [2024-11-26 19:32:29.647091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d1d102d1 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.365 [2024-11-26 19:32:29.647105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:54.624 #37 NEW cov: 12554 ft: 15391 corp: 26/366b lim: 30 exec/s: 37 rss: 74Mb L: 26/26 MS: 1 ChangeBinInt- 00:05:54.624 [2024-11-26 19:32:29.706435] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xd5a2 00:05:54.624 [2024-11-26 19:32:29.706644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000006 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.624 [2024-11-26 19:32:29.706672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.624 #38 NEW cov: 12554 ft: 15418 corp: 27/376b lim: 30 exec/s: 38 rss: 74Mb L: 10/26 MS: 1 ShuffleBytes- 00:05:54.624 [2024-11-26 19:32:29.746623] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.624 [2024-11-26 19:32:29.746739] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.624 [2024-11-26 19:32:29.746845] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.624 [2024-11-26 19:32:29.747045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:404b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.624 [2024-11-26 19:32:29.747070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.624 [2024-11-26 19:32:29.747127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.624 [2024-11-26 19:32:29.747141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.624 [2024-11-26 19:32:29.747194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.624 [2024-11-26 19:32:29.747207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.624 #39 NEW cov: 12554 ft: 15424 corp: 28/395b lim: 30 exec/s: 39 rss: 74Mb L: 19/26 MS: 1 CopyPart- 00:05:54.624 [2024-11-26 19:32:29.806768] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.624 [2024-11-26 19:32:29.806884] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d4a2 00:05:54.624 [2024-11-26 19:32:29.806989] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:05:54.624 [2024-11-26 19:32:29.807203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:405b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.624 [2024-11-26 19:32:29.807230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.624 [2024-11-26 19:32:29.807285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b098191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.624 [2024-11-26 19:32:29.807300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.624 [2024-11-26 19:32:29.807354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6a0600a4 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.625 [2024-11-26 19:32:29.807368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.625 #40 NEW cov: 12554 ft: 15441 corp: 29/415b lim: 30 exec/s: 40 rss: 75Mb L: 20/26 MS: 1 ChangeBinInt- 00:05:54.625 [2024-11-26 19:32:29.866924] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000a26a 00:05:54.625 [2024-11-26 19:32:29.867132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.625 [2024-11-26 19:32:29.867156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.625 #41 NEW cov: 12554 ft: 15465 corp: 30/425b lim: 30 exec/s: 41 rss: 75Mb L: 10/26 MS: 1 CopyPart- 00:05:54.625 [2024-11-26 19:32:29.907005] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d4a2 00:05:54.625 [2024-11-26 19:32:29.907212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.625 [2024-11-26 19:32:29.907242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.625 #42 NEW cov: 12554 ft: 15470 corp: 31/434b lim: 30 exec/s: 42 rss: 75Mb L: 9/26 MS: 1 ChangeBit- 00:05:54.884 [2024-11-26 19:32:29.947135] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000404b 00:05:54.884 [2024-11-26 19:32:29.947251] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.884 [2024-11-26 19:32:29.947472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.884 [2024-11-26 19:32:29.947498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.884 [2024-11-26 19:32:29.947554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.884 [2024-11-26 19:32:29.947569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.884 #48 NEW cov: 12554 ft: 15492 corp: 32/447b lim: 30 exec/s: 48 rss: 75Mb L: 13/26 MS: 1 ShuffleBytes- 00:05:54.884 [2024-11-26 19:32:29.987346] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.884 [2024-11-26 19:32:29.987467] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.884 [2024-11-26 19:32:29.987572] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.884 [2024-11-26 19:32:29.987799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:404083ee cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.884 [2024-11-26 19:32:29.987825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.884 [2024-11-26 19:32:29.987881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.884 [2024-11-26 19:32:29.987896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.884 [2024-11-26 19:32:29.987951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.884 [2024-11-26 19:32:29.987965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.884 #49 NEW cov: 12554 ft: 15521 corp: 33/465b lim: 30 exec/s: 49 rss: 75Mb L: 18/26 MS: 1 ChangeByte- 00:05:54.884 [2024-11-26 19:32:30.027426] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.884 [2024-11-26 19:32:30.027545] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.884 [2024-11-26 19:32:30.027741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:405b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.885 
[2024-11-26 19:32:30.027767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.885 [2024-11-26 19:32:30.027825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.885 [2024-11-26 19:32:30.027840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.885 #50 NEW cov: 12554 ft: 15527 corp: 34/478b lim: 30 exec/s: 50 rss: 75Mb L: 13/26 MS: 1 ShuffleBytes- 00:05:54.885 [2024-11-26 19:32:30.067670] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.885 [2024-11-26 19:32:30.067789] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.885 [2024-11-26 19:32:30.067897] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b07 00:05:54.885 [2024-11-26 19:32:30.068007] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.885 [2024-11-26 19:32:30.068215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:405b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.885 [2024-11-26 19:32:30.068242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.885 [2024-11-26 19:32:30.068299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b5b8307 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.885 [2024-11-26 19:32:30.068315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.885 [2024-11-26 19:32:30.068372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.885 [2024-11-26 19:32:30.068387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.885 [2024-11-26 19:32:30.068440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:5b5b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.885 [2024-11-26 19:32:30.068454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:54.885 #51 NEW cov: 12554 ft: 15571 corp: 35/502b lim: 30 exec/s: 51 rss: 75Mb L: 24/26 MS: 1 CopyPart- 00:05:54.885 [2024-11-26 19:32:30.127749] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005b5b 00:05:54.885 [2024-11-26 19:32:30.127866] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d5d5 00:05:54.885 [2024-11-26 19:32:30.127970] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:05:54.885 [2024-11-26 19:32:30.128189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:405b835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.885 [2024-11-26 19:32:30.128218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.885 [2024-11-26 19:32:30.128273] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b098101 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.885 [2024-11-26 19:32:30.128288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.885 [2024-11-26 19:32:30.128346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:c72a0210 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:54.885 [2024-11-26 19:32:30.128360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.885 #52 NEW cov: 12554 ft: 15586 corp: 36/522b lim: 30 exec/s: 26 rss: 75Mb L: 20/26 MS: 1 CMP- DE: "\001\221\325\325\307*\020R"- 00:05:54.885 #52 DONE cov: 12554 ft: 15586 corp: 36/522b lim: 30 exec/s: 26 rss: 75Mb 00:05:54.885 ###### Recommended dictionary. ###### 00:05:54.885 "\000\221\325\324\242j\006\350" # Uses: 3 00:05:54.885 "\377\377\377\377" # Uses: 1 00:05:54.885 "\377\366" # Uses: 0 00:05:54.885 "\020\000\000\000\000\000\000\000" # Uses: 0 00:05:54.885 "\001\221\325\325\307*\020R" # Uses: 0 00:05:54.885 ###### End of recommended dictionary. ###### 00:05:54.885 Done 52 runs in 2 second(s) 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:55.157 19:32:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:05:55.157 [2024-11-26 19:32:30.325105] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:05:55.157 [2024-11-26 19:32:30.325175] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119194 ] 00:05:55.416 [2024-11-26 19:32:30.585331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.416 [2024-11-26 19:32:30.636215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.416 [2024-11-26 19:32:30.695855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.416 [2024-11-26 19:32:30.712253] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:05:55.675 INFO: Running with entropic power schedule (0xFF, 100). 00:05:55.675 INFO: Seed: 4003053498 00:05:55.675 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:05:55.675 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:05:55.675 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:05:55.675 INFO: A corpus is not provided, starting from an empty corpus 00:05:55.675 #2 INITED exec/s: 0 rss: 65Mb 00:05:55.675 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:05:55.675 This may also happen if the target rejected all inputs we tried so far 00:05:55.675 [2024-11-26 19:32:30.783113] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:55.675 [2024-11-26 19:32:30.783392] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:55.675 [2024-11-26 19:32:30.783868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0900000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:55.675 [2024-11-26 19:32:30.783909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.675 [2024-11-26 19:32:30.783984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:55.675 [2024-11-26 19:32:30.784003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.675 [2024-11-26 19:32:30.784083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:55.675 [2024-11-26 19:32:30.784103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.935 NEW_FUNC[1/716]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:05:55.935 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:55.935 #16 NEW cov: 12270 ft: 12271 corp: 2/26b lim: 35 exec/s: 0 rss: 73Mb L: 25/25 MS: 4 CrossOver-ChangeBinInt-CrossOver-InsertRepeatedBytes- 00:05:55.935 [2024-11-26 19:32:31.123371] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:55.935 [2024-11-26 19:32:31.123562] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:55.935 [2024-11-26 19:32:31.123934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a090040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:55.935 [2024-11-26 19:32:31.123976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.935 [2024-11-26 19:32:31.124111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:55.935 [2024-11-26 19:32:31.124132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.935 [2024-11-26 19:32:31.124266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:55.935 [2024-11-26 19:32:31.124293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.935 #20 NEW cov: 12384 ft: 13053 corp: 3/48b lim: 35 exec/s: 0 rss: 73Mb L: 22/25 MS: 4 CrossOver-ShuffleBytes-ChangeBit-CrossOver- 00:05:55.935 [2024-11-26 19:32:31.173451] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: 
*ERROR*: Identify Namespace for invalid NSID 0 00:05:55.935 [2024-11-26 19:32:31.173851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a09000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:55.935 [2024-11-26 19:32:31.173884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.935 [2024-11-26 19:32:31.174015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:55.935 [2024-11-26 19:32:31.174037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.935 #22 NEW cov: 12390 ft: 13625 corp: 4/63b lim: 35 exec/s: 0 rss: 73Mb L: 15/25 MS: 2 ShuffleBytes-CrossOver- 00:05:55.935 [2024-11-26 19:32:31.223677] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:55.935 [2024-11-26 19:32:31.223868] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:55.935 [2024-11-26 19:32:31.224260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a090040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:55.935 [2024-11-26 19:32:31.224291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.935 [2024-11-26 19:32:31.224424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:55.935 [2024-11-26 19:32:31.224448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.935 [2024-11-26 19:32:31.224575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:55.935 [2024-11-26 19:32:31.224602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:56.195 #23 NEW cov: 12475 ft: 13850 corp: 5/85b lim: 35 exec/s: 0 rss: 73Mb L: 22/25 MS: 1 CopyPart- 00:05:56.195 [2024-11-26 19:32:31.293886] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.195 [2024-11-26 19:32:31.294095] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.195 [2024-11-26 19:32:31.294475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a090040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.195 [2024-11-26 19:32:31.294506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.195 [2024-11-26 19:32:31.294641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.195 [2024-11-26 19:32:31.294667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.195 [2024-11-26 19:32:31.294793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.195 [2024-11-26 19:32:31.294819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:56.195 #24 NEW cov: 12475 ft: 13902 corp: 6/107b lim: 35 exec/s: 0 rss: 73Mb L: 22/25 MS: 1 CrossOver- 00:05:56.195 [2024-11-26 19:32:31.343947] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.195 [2024-11-26 19:32:31.344341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a09000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.195 [2024-11-26 19:32:31.344370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.195 [2024-11-26 19:32:31.344500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.195 [2024-11-26 19:32:31.344525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.195 #25 NEW cov: 12475 ft: 13994 corp: 7/122b lim: 35 exec/s: 0 rss: 74Mb L: 15/25 MS: 1 ShuffleBytes- 00:05:56.195 [2024-11-26 19:32:31.414208] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.195 [2024-11-26 19:32:31.414394] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.195 [2024-11-26 19:32:31.414811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a090040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.195 [2024-11-26 19:32:31.414842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.195 [2024-11-26 19:32:31.414964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.195 [2024-11-26 19:32:31.414987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.195 [2024-11-26 19:32:31.415122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000c500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.195 [2024-11-26 19:32:31.415144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:56.195 #26 NEW cov: 12475 ft: 14040 corp: 8/144b lim: 35 exec/s: 0 rss: 74Mb L: 22/25 MS: 1 ChangeByte- 00:05:56.195 [2024-11-26 19:32:31.464395] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.195 [2024-11-26 19:32:31.464577] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.195 [2024-11-26 19:32:31.464961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a090040 cdw11:00001000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.195 [2024-11-26 19:32:31.464990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.195 [2024-11-26 19:32:31.465122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.195 [2024-11-26 19:32:31.465142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.195 [2024-11-26 19:32:31.465279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.195 [2024-11-26 19:32:31.465303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:56.454 #27 NEW cov: 12475 ft: 14131 corp: 9/166b lim: 35 exec/s: 0 rss: 74Mb L: 22/25 MS: 1 ChangeBit- 00:05:56.454 [2024-11-26 19:32:31.535438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.454 [2024-11-26 19:32:31.535466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.454 [2024-11-26 19:32:31.535607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.454 [2024-11-26 19:32:31.535625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.454 [2024-11-26 19:32:31.535760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.454 [2024-11-26 19:32:31.535780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:56.454 [2024-11-26 19:32:31.535914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.454 [2024-11-26 19:32:31.535930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:56.454 #28 NEW cov: 12475 ft: 14668 corp: 10/196b lim: 35 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:05:56.454 [2024-11-26 19:32:31.584686] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.454 [2024-11-26 19:32:31.585065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a09000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.455 [2024-11-26 19:32:31.585095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.455 [2024-11-26 19:32:31.585228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.455 [2024-11-26 19:32:31.585257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.455 #29 NEW cov: 12475 ft: 14717 corp: 11/214b lim: 35 exec/s: 0 rss: 74Mb L: 18/30 MS: 1 InsertRepeatedBytes- 00:05:56.455 [2024-11-26 19:32:31.655037] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.455 [2024-11-26 19:32:31.655219] 
ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.455 [2024-11-26 19:32:31.655605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a090040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.455 [2024-11-26 19:32:31.655639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.455 [2024-11-26 19:32:31.655769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.455 [2024-11-26 19:32:31.655797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.455 [2024-11-26 19:32:31.655934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000000c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.455 [2024-11-26 19:32:31.655953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:56.455 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:05:56.455 #30 NEW cov: 12498 ft: 14747 corp: 12/237b lim: 35 exec/s: 0 rss: 74Mb L: 23/30 MS: 1 InsertByte- 00:05:56.455 [2024-11-26 19:32:31.726241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.455 [2024-11-26 19:32:31.726280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.455 [2024-11-26 19:32:31.726423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.455 [2024-11-26 19:32:31.726449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.455 [2024-11-26 19:32:31.726604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.455 [2024-11-26 19:32:31.726628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:56.455 [2024-11-26 19:32:31.726779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.455 [2024-11-26 19:32:31.726802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:56.714 #31 NEW cov: 12498 ft: 14864 corp: 13/267b lim: 35 exec/s: 31 rss: 74Mb L: 30/30 MS: 1 ChangeByte- 00:05:56.714 [2024-11-26 19:32:31.795502] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.714 [2024-11-26 19:32:31.795691] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.714 [2024-11-26 19:32:31.796064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a09000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.714 [2024-11-26 19:32:31.796095] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.714 [2024-11-26 19:32:31.796226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.714 [2024-11-26 19:32:31.796255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.714 [2024-11-26 19:32:31.796390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.714 [2024-11-26 19:32:31.796416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:56.714 #32 NEW cov: 12498 ft: 14874 corp: 14/293b lim: 35 exec/s: 32 rss: 74Mb L: 26/30 MS: 1 InsertRepeatedBytes- 00:05:56.714 [2024-11-26 19:32:31.845621] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.714 [2024-11-26 19:32:31.846003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a09000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.714 [2024-11-26 19:32:31.846032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.714 [2024-11-26 19:32:31.846159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.714 [2024-11-26 19:32:31.846180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.714 #33 NEW cov: 12498 ft: 14913 corp: 15/311b lim: 35 exec/s: 33 rss: 74Mb L: 18/30 MS: 1 CopyPart- 00:05:56.714 [2024-11-26 19:32:31.915769] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.714 [2024-11-26 19:32:31.916187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a09000a cdw11:00008800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.714 [2024-11-26 19:32:31.916216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.714 [2024-11-26 19:32:31.916350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.714 [2024-11-26 19:32:31.916370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.714 #34 NEW cov: 12498 ft: 14930 corp: 16/326b lim: 35 exec/s: 34 rss: 74Mb L: 15/30 MS: 1 ChangeByte- 00:05:56.714 [2024-11-26 19:32:31.966207] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.714 [2024-11-26 19:32:31.966605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a090040 cdw11:0a00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.714 [2024-11-26 19:32:31.966634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.714 [2024-11-26 19:32:31.966763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000088 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.714 [2024-11-26 19:32:31.966782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.714 [2024-11-26 19:32:31.966925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.714 [2024-11-26 19:32:31.966951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:56.714 #35 NEW cov: 12498 ft: 14970 corp: 17/348b lim: 35 exec/s: 35 rss: 74Mb L: 22/30 MS: 1 CrossOver- 00:05:56.973 [2024-11-26 19:32:32.037050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.037079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.973 [2024-11-26 19:32:32.037213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.037232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.973 [2024-11-26 19:32:32.037367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.037385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:56.973 [2024-11-26 19:32:32.037530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.037552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:56.973 #36 NEW cov: 12498 ft: 14981 corp: 18/378b lim: 35 exec/s: 36 rss: 74Mb L: 30/30 MS: 1 ShuffleBytes- 00:05:56.973 [2024-11-26 19:32:32.106338] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.973 [2024-11-26 19:32:32.106720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f6f7000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.106750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.973 [2024-11-26 19:32:32.106881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.106903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.973 #37 NEW cov: 12498 ft: 15099 corp: 19/396b lim: 35 exec/s: 37 rss: 74Mb L: 18/30 MS: 1 ChangeBinInt- 00:05:56.973 [2024-11-26 19:32:32.156778] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.973 [2024-11-26 19:32:32.157194] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0af7000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.157224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.973 [2024-11-26 19:32:32.157358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fff800ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.157377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.973 [2024-11-26 19:32:32.157508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.157532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:56.973 #38 NEW cov: 12498 ft: 15111 corp: 20/422b lim: 35 exec/s: 38 rss: 74Mb L: 26/30 MS: 1 ChangeBinInt- 00:05:56.973 [2024-11-26 19:32:32.226775] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:56.973 [2024-11-26 19:32:32.227184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a09000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.227214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.973 [2024-11-26 19:32:32.227342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.227370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.973 #39 NEW cov: 12498 ft: 15183 corp: 21/437b lim: 35 exec/s: 39 rss: 74Mb L: 15/30 MS: 1 ShuffleBytes- 00:05:56.973 [2024-11-26 19:32:32.277820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.277849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.973 [2024-11-26 19:32:32.277989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.278009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.973 [2024-11-26 19:32:32.278147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.278168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:56.973 [2024-11-26 19:32:32.278304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.973 [2024-11-26 19:32:32.278322] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:57.233 #40 NEW cov: 12498 ft: 15200 corp: 22/467b lim: 35 exec/s: 40 rss: 74Mb L: 30/30 MS: 1 ChangeBinInt- 00:05:57.233 [2024-11-26 19:32:32.327332] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:57.233 [2024-11-26 19:32:32.327754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0af7000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.233 [2024-11-26 19:32:32.327787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.233 [2024-11-26 19:32:32.327930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fff800ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.233 [2024-11-26 19:32:32.327948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.233 [2024-11-26 19:32:32.328078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.233 [2024-11-26 19:32:32.328098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.233 #41 NEW cov: 12498 ft: 15242 corp: 23/492b lim: 35 exec/s: 41 rss: 74Mb L: 25/30 MS: 1 EraseBytes- 00:05:57.233 [2024-11-26 19:32:32.398252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.233 [2024-11-26 19:32:32.398283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.233 [2024-11-26 19:32:32.398412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.233 [2024-11-26 19:32:32.398434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.233 [2024-11-26 19:32:32.398577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.233 [2024-11-26 19:32:32.398594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.233 [2024-11-26 19:32:32.398747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.233 [2024-11-26 19:32:32.398766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:57.233 #42 NEW cov: 12498 ft: 15259 corp: 24/521b lim: 35 exec/s: 42 rss: 74Mb L: 29/30 MS: 1 EraseBytes- 00:05:57.233 [2024-11-26 19:32:32.467860] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:57.233 [2024-11-26 19:32:32.468252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a09000a cdw11:5c000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.233 [2024-11-26 19:32:32.468281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.233 [2024-11-26 19:32:32.468413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:21d700b0 cdw11:0000d591 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.233 [2024-11-26 19:32:32.468435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.233 [2024-11-26 19:32:32.468566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.233 [2024-11-26 19:32:32.468589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.233 #43 NEW cov: 12498 ft: 15265 corp: 25/547b lim: 35 exec/s: 43 rss: 74Mb L: 26/30 MS: 1 CMP- DE: "\\\354\260!\327\325\221\000"- 00:05:57.233 [2024-11-26 19:32:32.518642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.233 [2024-11-26 19:32:32.518669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.234 [2024-11-26 19:32:32.518791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.234 [2024-11-26 19:32:32.518807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.234 [2024-11-26 19:32:32.518935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.234 [2024-11-26 19:32:32.518951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.234 [2024-11-26 19:32:32.519080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff000a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.234 [2024-11-26 19:32:32.519098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:57.234 #44 NEW cov: 12498 ft: 15290 corp: 26/579b lim: 35 exec/s: 44 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:05:57.493 [2024-11-26 19:32:32.568035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a09000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.493 [2024-11-26 19:32:32.568063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.493 #45 NEW cov: 12498 ft: 15584 corp: 27/590b lim: 35 exec/s: 45 rss: 74Mb L: 11/32 MS: 1 EraseBytes- 00:05:57.493 [2024-11-26 19:32:32.618073] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:57.493 [2024-11-26 19:32:32.618434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ecb0005c cdw11:d50021d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.493 [2024-11-26 19:32:32.618463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.493 [2024-11-26 19:32:32.618603] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.493 [2024-11-26 19:32:32.618626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.493 #46 NEW cov: 12498 ft: 15604 corp: 28/608b lim: 35 exec/s: 46 rss: 74Mb L: 18/32 MS: 1 PersAutoDict- DE: "\\\354\260!\327\325\221\000"- 00:05:57.493 [2024-11-26 19:32:32.669180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.493 [2024-11-26 19:32:32.669207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.493 [2024-11-26 19:32:32.669342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.493 [2024-11-26 19:32:32.669365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.493 [2024-11-26 19:32:32.669499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff002f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.493 [2024-11-26 19:32:32.669518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.493 [2024-11-26 19:32:32.669655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.493 [2024-11-26 19:32:32.669672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:57.493 #47 NEW cov: 12498 ft: 15617 corp: 29/638b lim: 35 exec/s: 47 rss: 75Mb L: 30/32 MS: 1 InsertByte- 00:05:57.494 [2024-11-26 19:32:32.739384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ec00ff5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.494 [2024-11-26 19:32:32.739412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.494 [2024-11-26 19:32:32.739544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d7d50021 cdw11:ff009100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.494 [2024-11-26 19:32:32.739560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.494 [2024-11-26 19:32:32.739695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.494 [2024-11-26 19:32:32.739715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.494 [2024-11-26 19:32:32.739837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.494 [2024-11-26 19:32:32.739854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:57.494 #48 
NEW cov: 12498 ft: 15637 corp: 30/668b lim: 35 exec/s: 24 rss: 75Mb L: 30/32 MS: 1 PersAutoDict- DE: "\\\354\260!\327\325\221\000"- 00:05:57.494 #48 DONE cov: 12498 ft: 15637 corp: 30/668b lim: 35 exec/s: 24 rss: 75Mb 00:05:57.494 ###### Recommended dictionary. ###### 00:05:57.494 "\\\354\260!\327\325\221\000" # Uses: 2 00:05:57.494 ###### End of recommended dictionary. ###### 00:05:57.494 Done 48 runs in 2 second(s) 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:57.753 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:57.754 19:32:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:05:57.754 [2024-11-26 19:32:32.909127] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:05:57.754 [2024-11-26 19:32:32.909200] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119730 ] 00:05:58.013 [2024-11-26 19:32:33.168446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.013 [2024-11-26 19:32:33.228019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.013 [2024-11-26 19:32:33.287038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:58.013 [2024-11-26 19:32:33.303404] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:05:58.013 INFO: Running with entropic power schedule (0xFF, 100). 00:05:58.013 INFO: Seed: 2300063843 00:05:58.272 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:05:58.272 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:05:58.272 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:05:58.272 INFO: A corpus is not provided, starting from an empty corpus 00:05:58.272 #2 INITED exec/s: 0 rss: 65Mb 00:05:58.272 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:05:58.272 This may also happen if the target rejected all inputs we tried so far 00:05:58.530 NEW_FUNC[1/705]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:05:58.530 NEW_FUNC[2/705]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:58.530 #6 NEW cov: 12137 ft: 12126 corp: 2/6b lim: 20 exec/s: 0 rss: 73Mb L: 5/5 MS: 4 ChangeBit-InsertByte-ChangeBit-InsertRepeatedBytes- 00:05:58.530 #7 NEW cov: 12267 ft: 12782 corp: 3/11b lim: 20 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:05:58.530 #8 NEW cov: 12273 ft: 12973 corp: 4/16b lim: 20 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:05:58.789 #9 NEW cov: 12358 ft: 13249 corp: 5/21b lim: 20 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:05:58.789 #12 NEW cov: 12384 ft: 13757 corp: 6/40b lim: 20 exec/s: 0 rss: 73Mb L: 19/19 MS: 3 EraseBytes-CopyPart-InsertRepeatedBytes- 00:05:58.789 #14 NEW cov: 12384 ft: 13862 corp: 7/46b lim: 20 exec/s: 0 rss: 73Mb L: 6/19 MS: 2 ChangeBit-CrossOver- 00:05:58.789 #15 NEW cov: 12384 ft: 13919 corp: 8/65b lim: 20 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 ChangeBinInt- 00:05:59.047 #16 NEW cov: 12392 ft: 14063 corp: 9/78b lim: 20 exec/s: 0 rss: 74Mb L: 13/19 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:05:59.047 #17 NEW cov: 12392 ft: 14093 corp: 10/84b lim: 20 exec/s: 0 rss: 74Mb L: 6/19 MS: 1 InsertByte- 00:05:59.047 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:05:59.047 #18 NEW cov: 12415 ft: 14177 corp: 11/97b lim: 20 exec/s: 0 rss: 74Mb L: 13/19 MS: 1 ChangeBinInt- 00:05:59.047 #19 NEW cov: 12415 ft: 14209 corp: 12/102b lim: 20 exec/s: 0 rss: 74Mb L: 5/19 MS: 1 CMP- DE: "\000\015"- 00:05:59.305 #20 NEW cov: 12415 ft: 14243 corp: 13/117b lim: 20 exec/s: 20 rss: 74Mb L: 15/19 MS: 1 InsertRepeatedBytes- 00:05:59.305 #21 NEW cov: 12415 ft: 14253 corp: 14/136b lim: 20 exec/s: 21 rss: 74Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:05:59.305 #22 NEW cov: 12415 ft: 14283 corp: 15/142b lim: 20 exec/s: 22 rss: 74Mb L: 
6/19 MS: 1 CopyPart- 00:05:59.305 NEW_FUNC[1/4]: 0x1379068 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3484 00:05:59.305 NEW_FUNC[2/4]: 0x1379be8 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3426 00:05:59.305 #23 NEW cov: 12498 ft: 14434 corp: 16/147b lim: 20 exec/s: 23 rss: 74Mb L: 5/19 MS: 1 ShuffleBytes- 00:05:59.563 #24 NEW cov: 12498 ft: 14498 corp: 17/166b lim: 20 exec/s: 24 rss: 74Mb L: 19/19 MS: 1 CrossOver- 00:05:59.563 #25 NEW cov: 12498 ft: 14512 corp: 18/171b lim: 20 exec/s: 25 rss: 74Mb L: 5/19 MS: 1 ChangeBinInt- 00:05:59.563 #26 NEW cov: 12499 ft: 14726 corp: 19/179b lim: 20 exec/s: 26 rss: 74Mb L: 8/19 MS: 1 CopyPart- 00:05:59.563 #27 NEW cov: 12499 ft: 14806 corp: 20/198b lim: 20 exec/s: 27 rss: 74Mb L: 19/19 MS: 1 ChangeBit- 00:05:59.563 #28 NEW cov: 12499 ft: 14846 corp: 21/203b lim: 20 exec/s: 28 rss: 74Mb L: 5/19 MS: 1 ChangeBit- 00:05:59.822 #29 NEW cov: 12499 ft: 14863 corp: 22/211b lim: 20 exec/s: 29 rss: 74Mb L: 8/19 MS: 1 InsertRepeatedBytes- 00:05:59.822 #30 NEW cov: 12499 ft: 14944 corp: 23/217b lim: 20 exec/s: 30 rss: 74Mb L: 6/19 MS: 1 CrossOver- 00:05:59.822 #31 NEW cov: 12499 ft: 14958 corp: 24/226b lim: 20 exec/s: 31 rss: 74Mb L: 9/19 MS: 1 InsertRepeatedBytes- 00:05:59.822 #32 NEW cov: 12499 ft: 14997 corp: 25/245b lim: 20 exec/s: 32 rss: 74Mb L: 19/19 MS: 1 ShuffleBytes- 00:06:00.081 #33 NEW cov: 12499 ft: 15013 corp: 26/258b lim: 20 exec/s: 33 rss: 75Mb L: 13/19 MS: 1 ChangeByte- 00:06:00.081 #34 NEW cov: 12499 ft: 15044 corp: 27/278b lim: 20 exec/s: 34 rss: 75Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:06:00.081 #35 NEW cov: 12499 ft: 15048 corp: 28/291b lim: 20 exec/s: 35 rss: 75Mb L: 13/20 MS: 1 InsertRepeatedBytes- 00:06:00.081 #36 NEW cov: 12499 ft: 15053 corp: 29/295b lim: 20 exec/s: 36 rss: 75Mb L: 4/20 MS: 1 EraseBytes- 00:06:00.081 #37 NEW cov: 12499 ft: 15067 corp: 30/306b lim: 20 exec/s: 18 rss: 75Mb L: 11/20 MS: 1 InsertRepeatedBytes- 00:06:00.081 #37 DONE cov: 12499 ft: 15067 corp: 30/306b lim: 20 exec/s: 18 rss: 75Mb 00:06:00.081 ###### Recommended dictionary. ###### 00:06:00.081 "\377\377\377\377\377\377\377\377" # Uses: 0 00:06:00.081 "\000\015" # Uses: 0 00:06:00.081 ###### End of recommended dictionary. 
###### 00:06:00.081 Done 37 runs in 2 second(s) 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:00.344 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:00.345 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:00.345 19:32:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:00.345 [2024-11-26 19:32:35.521843] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:06:00.345 [2024-11-26 19:32:35.521908] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120263 ] 00:06:00.701 [2024-11-26 19:32:35.775775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.701 [2024-11-26 19:32:35.826462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.701 [2024-11-26 19:32:35.885568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.701 [2024-11-26 19:32:35.901936] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:00.701 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:00.701 INFO: Seed: 603107001 00:06:00.701 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:00.701 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:00.701 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:00.701 INFO: A corpus is not provided, starting from an empty corpus 00:06:00.701 #2 INITED exec/s: 0 rss: 65Mb 00:06:00.701 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:00.701 This may also happen if the target rejected all inputs we tried so far 00:06:00.701 [2024-11-26 19:32:35.951185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:00.701 [2024-11-26 19:32:35.951215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.010 NEW_FUNC[1/717]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:01.010 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:01.010 #35 NEW cov: 12266 ft: 12265 corp: 2/11b lim: 35 exec/s: 0 rss: 73Mb L: 10/10 MS: 3 InsertByte-CrossOver-CMP- DE: "\010\000\000\000\000\000\000\000"- 00:06:01.010 [2024-11-26 19:32:36.282170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.010 [2024-11-26 19:32:36.282202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.010 [2024-11-26 19:32:36.282257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.010 [2024-11-26 19:32:36.282271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.271 #36 NEW cov: 12395 ft: 13461 corp: 3/29b lim: 35 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 PersAutoDict- DE: "\010\000\000\000\000\000\000\000"- 00:06:01.271 [2024-11-26 19:32:36.342093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.271 [2024-11-26 19:32:36.342121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.271 #37 NEW cov: 12401 ft: 13833 corp: 4/40b lim: 35 exec/s: 0 rss: 73Mb L: 11/18 MS: 1 CrossOver- 00:06:01.271 [2024-11-26 19:32:36.382178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a08210a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.271 [2024-11-26 19:32:36.382207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.271 #39 NEW cov: 12486 ft: 14065 corp: 5/52b lim: 35 exec/s: 0 rss: 73Mb L: 12/18 MS: 2 InsertByte-CrossOver- 00:06:01.271 [2024-11-26 19:32:36.422250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63630002 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:01.271 [2024-11-26 19:32:36.422275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.271 #40 NEW cov: 12486 ft: 14183 corp: 6/60b lim: 35 exec/s: 0 rss: 73Mb L: 8/18 MS: 1 InsertRepeatedBytes- 00:06:01.271 [2024-11-26 19:32:36.462412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.271 [2024-11-26 19:32:36.462438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.271 #41 NEW cov: 12486 ft: 14238 corp: 7/67b lim: 35 exec/s: 0 rss: 73Mb L: 7/18 MS: 1 EraseBytes- 00:06:01.271 [2024-11-26 19:32:36.522777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:d5d9ff90 cdw11:7d380000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.271 [2024-11-26 19:32:36.522804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.271 [2024-11-26 19:32:36.522859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0a0a8221 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.271 [2024-11-26 19:32:36.522873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.271 #42 NEW cov: 12486 ft: 14330 corp: 8/87b lim: 35 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 CMP- DE: "\377\220\325\331}8,\202"- 00:06:01.529 [2024-11-26 19:32:36.582794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.529 [2024-11-26 19:32:36.582823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.529 #43 NEW cov: 12486 ft: 14349 corp: 9/97b lim: 35 exec/s: 0 rss: 73Mb L: 10/20 MS: 1 ShuffleBytes- 00:06:01.529 [2024-11-26 19:32:36.622856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08000a3b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.529 [2024-11-26 19:32:36.622883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.529 #44 NEW cov: 12486 ft: 14368 corp: 10/104b lim: 35 exec/s: 0 rss: 73Mb L: 7/20 MS: 1 ChangeByte- 00:06:01.529 [2024-11-26 19:32:36.683340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.529 [2024-11-26 19:32:36.683367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.529 [2024-11-26 19:32:36.683420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.529 [2024-11-26 19:32:36.683434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.529 [2024-11-26 19:32:36.683484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:01.529 [2024-11-26 19:32:36.683498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:01.529 #45 NEW cov: 12486 ft: 14638 corp: 11/129b lim: 35 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:06:01.529 [2024-11-26 19:32:36.723317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.529 [2024-11-26 19:32:36.723350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.529 [2024-11-26 19:32:36.723405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:60000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.529 [2024-11-26 19:32:36.723419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.529 #46 NEW cov: 12486 ft: 14651 corp: 12/147b lim: 35 exec/s: 0 rss: 74Mb L: 18/25 MS: 1 ChangeByte- 00:06:01.529 [2024-11-26 19:32:36.783332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:000a0a00 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.530 [2024-11-26 19:32:36.783358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.530 #47 NEW cov: 12486 ft: 14694 corp: 13/158b lim: 35 exec/s: 0 rss: 74Mb L: 11/25 MS: 1 ShuffleBytes- 00:06:01.530 [2024-11-26 19:32:36.823441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:000a0a00 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.530 [2024-11-26 19:32:36.823467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.788 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:01.788 #48 NEW cov: 12509 ft: 14727 corp: 14/169b lim: 35 exec/s: 0 rss: 74Mb L: 11/25 MS: 1 ChangeBinInt- 00:06:01.789 [2024-11-26 19:32:36.883628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08000a3b cdw11:001c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.789 [2024-11-26 19:32:36.883654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.789 #49 NEW cov: 12509 ft: 14729 corp: 15/180b lim: 35 exec/s: 0 rss: 74Mb L: 11/25 MS: 1 CMP- DE: "\034\001\000\000"- 00:06:01.789 [2024-11-26 19:32:36.943958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.789 [2024-11-26 19:32:36.943984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.789 [2024-11-26 19:32:36.944039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:60000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.789 [2024-11-26 19:32:36.944053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.789 #50 NEW cov: 12509 ft: 14771 corp: 16/198b lim: 35 exec/s: 50 
rss: 74Mb L: 18/25 MS: 1 ShuffleBytes- 00:06:01.789 [2024-11-26 19:32:37.004116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.789 [2024-11-26 19:32:37.004143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.789 [2024-11-26 19:32:37.004195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:60000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.789 [2024-11-26 19:32:37.004209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.789 #51 NEW cov: 12509 ft: 14812 corp: 17/217b lim: 35 exec/s: 51 rss: 74Mb L: 19/25 MS: 1 InsertByte- 00:06:01.789 [2024-11-26 19:32:37.064117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:d5d9ff90 cdw11:7d380000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.789 [2024-11-26 19:32:37.064144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.047 #52 NEW cov: 12509 ft: 14856 corp: 18/225b lim: 35 exec/s: 52 rss: 74Mb L: 8/25 MS: 1 PersAutoDict- DE: "\377\220\325\331}8,\202"- 00:06:02.048 [2024-11-26 19:32:37.124437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08001108 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.048 [2024-11-26 19:32:37.124463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.048 [2024-11-26 19:32:37.124516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:60000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.048 [2024-11-26 19:32:37.124529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.048 #53 NEW cov: 12509 ft: 14861 corp: 19/243b lim: 35 exec/s: 53 rss: 74Mb L: 18/25 MS: 1 ChangeBinInt- 00:06:02.048 [2024-11-26 19:32:37.164534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08001108 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.048 [2024-11-26 19:32:37.164560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.048 [2024-11-26 19:32:37.164615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00600000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.048 [2024-11-26 19:32:37.164646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.048 #54 NEW cov: 12509 ft: 14946 corp: 20/261b lim: 35 exec/s: 54 rss: 74Mb L: 18/25 MS: 1 ShuffleBytes- 00:06:02.048 [2024-11-26 19:32:37.224904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.048 [2024-11-26 19:32:37.224929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.048 [2024-11-26 19:32:37.224983] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:60000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.048 [2024-11-26 19:32:37.224996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.048 [2024-11-26 19:32:37.225047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00001c01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.048 [2024-11-26 19:32:37.225060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.048 #55 NEW cov: 12509 ft: 14971 corp: 21/284b lim: 35 exec/s: 55 rss: 74Mb L: 23/25 MS: 1 PersAutoDict- DE: "\034\001\000\000"- 00:06:02.048 [2024-11-26 19:32:37.284907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.048 [2024-11-26 19:32:37.284932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.048 [2024-11-26 19:32:37.284985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.048 [2024-11-26 19:32:37.284999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.048 #56 NEW cov: 12509 ft: 15043 corp: 22/302b lim: 35 exec/s: 56 rss: 74Mb L: 18/25 MS: 1 ShuffleBytes- 00:06:02.048 [2024-11-26 19:32:37.324975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:d5d9ff90 cdw11:7d380000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.048 [2024-11-26 19:32:37.325001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.048 [2024-11-26 19:32:37.325055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0a0a8221 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.048 [2024-11-26 19:32:37.325072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.307 #57 NEW cov: 12509 ft: 15056 corp: 23/322b lim: 35 exec/s: 57 rss: 75Mb L: 20/25 MS: 1 PersAutoDict- DE: "\010\000\000\000\000\000\000\000"- 00:06:02.307 [2024-11-26 19:32:37.385162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.307 [2024-11-26 19:32:37.385188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.307 [2024-11-26 19:32:37.385240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:60000000 cdw11:00400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.307 [2024-11-26 19:32:37.385253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.308 #58 NEW cov: 12509 ft: 15076 corp: 24/341b lim: 35 exec/s: 58 rss: 75Mb L: 19/25 MS: 1 ChangeBit- 00:06:02.308 [2024-11-26 19:32:37.425583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 
nsid:0 cdw10:0a080a08 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.308 [2024-11-26 19:32:37.425613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.308 [2024-11-26 19:32:37.425666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:60000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.308 [2024-11-26 19:32:37.425681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.308 [2024-11-26 19:32:37.425732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:08000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.308 [2024-11-26 19:32:37.425746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.308 [2024-11-26 19:32:37.425798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:60000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.308 [2024-11-26 19:32:37.425811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:02.308 #59 NEW cov: 12509 ft: 15410 corp: 25/374b lim: 35 exec/s: 59 rss: 75Mb L: 33/33 MS: 1 CrossOver- 00:06:02.308 [2024-11-26 19:32:37.465220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:e3630002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.308 [2024-11-26 19:32:37.465246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.308 #60 NEW cov: 12509 ft: 15415 corp: 26/382b lim: 35 exec/s: 60 rss: 75Mb L: 8/33 MS: 1 ChangeBit- 00:06:02.308 [2024-11-26 19:32:37.505787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:d5d9ff90 cdw11:7d380000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.308 [2024-11-26 19:32:37.505812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.308 [2024-11-26 19:32:37.505868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0a080000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.308 [2024-11-26 19:32:37.505881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.308 [2024-11-26 19:32:37.505934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2c820000 cdw11:210a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.308 [2024-11-26 19:32:37.505948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.308 [2024-11-26 19:32:37.506006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.308 [2024-11-26 19:32:37.506021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:02.308 [2024-11-26 19:32:37.565825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 
nsid:0 cdw10:d5d9ff90 cdw11:7d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.308 [2024-11-26 19:32:37.565851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.308 [2024-11-26 19:32:37.565904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8221002c cdw11:0a0a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.308 [2024-11-26 19:32:37.565918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.308 [2024-11-26 19:32:37.565970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.308 [2024-11-26 19:32:37.565984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.308 #62 NEW cov: 12509 ft: 15452 corp: 27/404b lim: 35 exec/s: 62 rss: 75Mb L: 22/33 MS: 2 CrossOver-EraseBytes- 00:06:02.308 [2024-11-26 19:32:37.605777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:d5d9ff90 cdw11:48380000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.308 [2024-11-26 19:32:37.605803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.308 [2024-11-26 19:32:37.605855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0a0a8221 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.308 [2024-11-26 19:32:37.605868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.568 #63 NEW cov: 12509 ft: 15470 corp: 28/424b lim: 35 exec/s: 63 rss: 75Mb L: 20/33 MS: 1 ChangeByte- 00:06:02.568 [2024-11-26 19:32:37.646356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08001108 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.568 [2024-11-26 19:32:37.646382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.568 [2024-11-26 19:32:37.646436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:60000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.568 [2024-11-26 19:32:37.646450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.568 [2024-11-26 19:32:37.646503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:08080000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.568 [2024-11-26 19:32:37.646518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.568 [2024-11-26 19:32:37.646568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.568 [2024-11-26 19:32:37.646582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:02.568 [2024-11-26 19:32:37.646638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) 
qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.568 [2024-11-26 19:32:37.646652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:02.568 #64 NEW cov: 12509 ft: 15522 corp: 29/459b lim: 35 exec/s: 64 rss: 75Mb L: 35/35 MS: 1 CrossOver- 00:06:02.568 [2024-11-26 19:32:37.686170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:d5d9ff90 cdw11:7d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.568 [2024-11-26 19:32:37.686196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.568 [2024-11-26 19:32:37.686251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0100001c cdw11:002c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.568 [2024-11-26 19:32:37.686264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.568 [2024-11-26 19:32:37.686317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0a08210a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.568 [2024-11-26 19:32:37.686346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.568 #65 NEW cov: 12509 ft: 15551 corp: 30/485b lim: 35 exec/s: 65 rss: 75Mb L: 26/35 MS: 1 PersAutoDict- DE: "\034\001\000\000"- 00:06:02.568 [2024-11-26 19:32:37.746004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.568 [2024-11-26 19:32:37.746029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.568 #67 NEW cov: 12509 ft: 15619 corp: 31/495b lim: 35 exec/s: 67 rss: 75Mb L: 10/35 MS: 2 CopyPart-PersAutoDict- DE: "\010\000\000\000\000\000\000\000"- 00:06:02.568 [2024-11-26 19:32:37.786491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a080a08 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.568 [2024-11-26 19:32:37.786516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.568 [2024-11-26 19:32:37.786568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:60000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.568 [2024-11-26 19:32:37.786583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.568 [2024-11-26 19:32:37.786658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.568 [2024-11-26 19:32:37.786673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.568 #68 NEW cov: 12509 ft: 15622 corp: 32/517b lim: 35 exec/s: 68 rss: 75Mb L: 22/35 MS: 1 EraseBytes- 00:06:02.568 [2024-11-26 19:32:37.846307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:06:02.568 [2024-11-26 19:32:37.846334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.568 #69 NEW cov: 12509 ft: 15645 corp: 33/527b lim: 35 exec/s: 69 rss: 75Mb L: 10/35 MS: 1 EraseBytes- 00:06:02.828 [2024-11-26 19:32:37.886394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:e3630000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.828 [2024-11-26 19:32:37.886420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.828 #70 NEW cov: 12509 ft: 15676 corp: 34/534b lim: 35 exec/s: 70 rss: 75Mb L: 7/35 MS: 1 EraseBytes- 00:06:02.828 [2024-11-26 19:32:37.946776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.828 [2024-11-26 19:32:37.946802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.829 [2024-11-26 19:32:37.946862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:d5d9ff90 cdw11:7d380000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.829 [2024-11-26 19:32:37.946876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.829 #71 NEW cov: 12509 ft: 15702 corp: 35/552b lim: 35 exec/s: 35 rss: 75Mb L: 18/35 MS: 1 PersAutoDict- DE: "\377\220\325\331}8,\202"- 00:06:02.829 #71 DONE cov: 12509 ft: 15702 corp: 35/552b lim: 35 exec/s: 35 rss: 75Mb 00:06:02.829 ###### Recommended dictionary. ###### 00:06:02.829 "\010\000\000\000\000\000\000\000" # Uses: 3 00:06:02.829 "\377\220\325\331}8,\202" # Uses: 2 00:06:02.829 "\034\001\000\000" # Uses: 2 00:06:02.829 ###### End of recommended dictionary. 
###### 00:06:02.829 Done 71 runs in 2 second(s) 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:02.829 19:32:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:03.088 [2024-11-26 19:32:38.140176] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:06:03.088 [2024-11-26 19:32:38.140247] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120585 ] 00:06:03.346 [2024-11-26 19:32:38.400516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.346 [2024-11-26 19:32:38.455551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.346 [2024-11-26 19:32:38.514735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.346 [2024-11-26 19:32:38.531094] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:03.346 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:03.346 INFO: Seed: 3234109630 00:06:03.346 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:03.346 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:03.346 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:03.346 INFO: A corpus is not provided, starting from an empty corpus 00:06:03.346 #2 INITED exec/s: 0 rss: 65Mb 00:06:03.346 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:03.346 This may also happen if the target rejected all inputs we tried so far 00:06:03.346 [2024-11-26 19:32:38.586692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.346 [2024-11-26 19:32:38.586722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.346 [2024-11-26 19:32:38.586780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.347 [2024-11-26 19:32:38.586795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:03.605 NEW_FUNC[1/716]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:03.605 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:03.605 #25 NEW cov: 12281 ft: 12291 corp: 2/20b lim: 45 exec/s: 0 rss: 73Mb L: 19/19 MS: 3 CopyPart-ChangeBit-InsertRepeatedBytes- 00:06:03.864 [2024-11-26 19:32:38.917736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.864 [2024-11-26 19:32:38.917777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.864 [2024-11-26 19:32:38.917843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfc00fc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.864 [2024-11-26 19:32:38.917861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:03.864 [2024-11-26 19:32:38.917923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.864 [2024-11-26 19:32:38.917941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:03.864 NEW_FUNC[1/1]: 0x1fe8b78 in msg_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:833 00:06:03.864 #26 NEW cov: 12406 ft: 13118 corp: 3/55b lim: 45 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:03.864 [2024-11-26 19:32:38.987612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.864 [2024-11-26 19:32:38.987639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.864 [2024-11-26 19:32:38.987695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.864 [2024-11-26 19:32:38.987709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:03.864 #27 NEW cov: 12412 ft: 13399 corp: 4/81b lim: 45 exec/s: 0 rss: 73Mb L: 26/35 MS: 1 CopyPart- 00:06:03.865 [2024-11-26 19:32:39.028052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.865 [2024-11-26 19:32:39.028081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.865 [2024-11-26 19:32:39.028140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.865 [2024-11-26 19:32:39.028158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:03.865 [2024-11-26 19:32:39.028214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:06060000 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.865 [2024-11-26 19:32:39.028229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:03.865 [2024-11-26 19:32:39.028284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.865 [2024-11-26 19:32:39.028297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:03.865 #28 NEW cov: 12497 ft: 14002 corp: 5/121b lim: 45 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:06:03.865 [2024-11-26 19:32:39.088229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.865 [2024-11-26 19:32:39.088255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.865 [2024-11-26 19:32:39.088311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfc00fc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.865 [2024-11-26 19:32:39.088325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:03.865 [2024-11-26 19:32:39.088379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.865 [2024-11-26 19:32:39.088393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:03.865 [2024-11-26 19:32:39.088447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.865 [2024-11-26 19:32:39.088460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:03.865 #34 NEW cov: 12497 ft: 14035 corp: 6/164b lim: 45 exec/s: 0 rss: 73Mb L: 43/43 MS: 1 InsertRepeatedBytes- 00:06:03.865 [2024-11-26 19:32:39.147928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.865 [2024-11-26 19:32:39.147954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.865 #35 NEW cov: 12497 ft: 14796 corp: 7/181b lim: 45 exec/s: 0 rss: 73Mb L: 17/43 MS: 1 EraseBytes- 00:06:04.123 [2024-11-26 19:32:39.188488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.123 [2024-11-26 19:32:39.188515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.123 [2024-11-26 19:32:39.188571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.123 [2024-11-26 19:32:39.188585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.123 [2024-11-26 19:32:39.188642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.123 [2024-11-26 19:32:39.188656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.123 [2024-11-26 19:32:39.188729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.123 [2024-11-26 19:32:39.188747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:04.123 #36 NEW cov: 12497 ft: 14895 corp: 8/224b lim: 45 exec/s: 0 rss: 73Mb L: 43/43 MS: 1 InsertRepeatedBytes- 00:06:04.123 [2024-11-26 19:32:39.228431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.123 [2024-11-26 19:32:39.228459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.123 [2024-11-26 19:32:39.228514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.123 [2024-11-26 19:32:39.228528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.123 [2024-11-26 19:32:39.228581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.123 [2024-11-26 19:32:39.228595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.123 #37 NEW cov: 12497 ft: 14914 corp: 9/251b lim: 45 exec/s: 0 rss: 73Mb L: 27/43 MS: 1 InsertByte- 00:06:04.123 [2024-11-26 19:32:39.268232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.123 [2024-11-26 19:32:39.268259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.123 #38 NEW cov: 12497 ft: 15049 corp: 10/268b lim: 45 exec/s: 0 rss: 73Mb L: 17/43 MS: 1 EraseBytes- 00:06:04.123 [2024-11-26 19:32:39.308815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:32000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.123 [2024-11-26 19:32:39.308843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.123 [2024-11-26 19:32:39.308897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.124 [2024-11-26 19:32:39.308911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.124 [2024-11-26 19:32:39.308966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00060000 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.124 [2024-11-26 19:32:39.308981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.124 [2024-11-26 19:32:39.309036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.124 [2024-11-26 19:32:39.309049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:04.124 #39 NEW cov: 12497 ft: 15113 corp: 11/309b lim: 45 exec/s: 0 rss: 74Mb L: 41/43 MS: 1 InsertByte- 00:06:04.124 [2024-11-26 19:32:39.368890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00ff0a03 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.124 [2024-11-26 19:32:39.368916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.124 [2024-11-26 19:32:39.368969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.124 [2024-11-26 19:32:39.368983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.124 [2024-11-26 19:32:39.369036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.124 [2024-11-26 19:32:39.369052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.124 #48 NEW cov: 12497 ft: 15127 corp: 12/343b lim: 45 exec/s: 0 rss: 74Mb L: 34/43 MS: 4 ShuffleBytes-CMP-ShuffleBytes-InsertRepeatedBytes- DE: "\003\000\000\000"- 00:06:04.124 [2024-11-26 19:32:39.408941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.124 [2024-11-26 19:32:39.408966] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.124 [2024-11-26 19:32:39.409022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:04000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.124 [2024-11-26 19:32:39.409035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.124 [2024-11-26 19:32:39.409089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.124 [2024-11-26 19:32:39.409103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.124 #49 NEW cov: 12497 ft: 15214 corp: 13/373b lim: 45 exec/s: 0 rss: 74Mb L: 30/43 MS: 1 CMP- DE: "\001\000\000\004"- 00:06:04.383 [2024-11-26 19:32:39.449042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.449069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.383 [2024-11-26 19:32:39.449128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfc00fc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.449142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.383 [2024-11-26 19:32:39.449197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.449227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.383 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:04.383 #50 NEW cov: 12520 ft: 15279 corp: 14/408b lim: 45 exec/s: 0 rss: 74Mb L: 35/43 MS: 1 ChangeBinInt- 00:06:04.383 [2024-11-26 19:32:39.489165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.489191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.383 [2024-11-26 19:32:39.489248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.489263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.383 [2024-11-26 19:32:39.489321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.489335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.383 #51 NEW cov: 12520 ft: 15318 corp: 15/435b lim: 45 exec/s: 0 rss: 74Mb L: 27/43 MS: 1 CopyPart- 
00:06:04.383 [2024-11-26 19:32:39.549363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.549393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.383 [2024-11-26 19:32:39.549453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfc00fc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.549467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.383 [2024-11-26 19:32:39.549525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.549539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.383 #52 NEW cov: 12520 ft: 15344 corp: 16/470b lim: 45 exec/s: 52 rss: 74Mb L: 35/43 MS: 1 ShuffleBytes- 00:06:04.383 [2024-11-26 19:32:39.589264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.589290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.383 [2024-11-26 19:32:39.589345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.589359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.383 #53 NEW cov: 12520 ft: 15370 corp: 17/489b lim: 45 exec/s: 53 rss: 74Mb L: 19/43 MS: 1 ChangeByte- 00:06:04.383 [2024-11-26 19:32:39.629383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.629410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.383 [2024-11-26 19:32:39.629468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.629483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.383 #54 NEW cov: 12520 ft: 15377 corp: 18/509b lim: 45 exec/s: 54 rss: 74Mb L: 20/43 MS: 1 InsertByte- 00:06:04.383 [2024-11-26 19:32:39.689948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:32000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.689974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.383 [2024-11-26 19:32:39.690029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.690043] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.383 [2024-11-26 19:32:39.690096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00060000 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.690111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.383 [2024-11-26 19:32:39.690167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.383 [2024-11-26 19:32:39.690180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:04.642 #55 NEW cov: 12520 ft: 15392 corp: 19/550b lim: 45 exec/s: 55 rss: 74Mb L: 41/43 MS: 1 ChangeBit- 00:06:04.642 [2024-11-26 19:32:39.750135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.642 [2024-11-26 19:32:39.750161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.642 [2024-11-26 19:32:39.750220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.642 [2024-11-26 19:32:39.750234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.642 [2024-11-26 19:32:39.750288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.642 [2024-11-26 19:32:39.750302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.642 [2024-11-26 19:32:39.750358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.642 [2024-11-26 19:32:39.750371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:04.643 #56 NEW cov: 12520 ft: 15417 corp: 20/586b lim: 45 exec/s: 56 rss: 74Mb L: 36/43 MS: 1 EraseBytes- 00:06:04.643 [2024-11-26 19:32:39.810094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.643 [2024-11-26 19:32:39.810120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.643 [2024-11-26 19:32:39.810178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.643 [2024-11-26 19:32:39.810192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.643 [2024-11-26 19:32:39.810251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.643 [2024-11-26 19:32:39.810265] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.643 #57 NEW cov: 12520 ft: 15429 corp: 21/613b lim: 45 exec/s: 57 rss: 74Mb L: 27/43 MS: 1 ChangeByte- 00:06:04.643 [2024-11-26 19:32:39.870307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.643 [2024-11-26 19:32:39.870335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.643 [2024-11-26 19:32:39.870391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00003b00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.643 [2024-11-26 19:32:39.870405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.643 [2024-11-26 19:32:39.870460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.643 [2024-11-26 19:32:39.870474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.643 #58 NEW cov: 12520 ft: 15435 corp: 22/640b lim: 45 exec/s: 58 rss: 74Mb L: 27/43 MS: 1 ChangeByte- 00:06:04.643 [2024-11-26 19:32:39.910497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.643 [2024-11-26 19:32:39.910523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.643 [2024-11-26 19:32:39.910579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.643 [2024-11-26 19:32:39.910601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.643 [2024-11-26 19:32:39.910656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.643 [2024-11-26 19:32:39.910671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.643 [2024-11-26 19:32:39.910725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.643 [2024-11-26 19:32:39.910739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:04.643 #60 NEW cov: 12520 ft: 15488 corp: 23/676b lim: 45 exec/s: 60 rss: 74Mb L: 36/43 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:04.643 [2024-11-26 19:32:39.950142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.643 [2024-11-26 19:32:39.950169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.902 #61 NEW cov: 12520 ft: 15490 corp: 24/693b lim: 45 exec/s: 61 rss: 74Mb L: 17/43 MS: 1 ChangeByte- 
00:06:04.902 [2024-11-26 19:32:40.010289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.902 [2024-11-26 19:32:40.010324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.902 #67 NEW cov: 12520 ft: 15619 corp: 25/710b lim: 45 exec/s: 67 rss: 74Mb L: 17/43 MS: 1 ChangeBit- 00:06:04.902 [2024-11-26 19:32:40.061001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.902 [2024-11-26 19:32:40.061031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.902 [2024-11-26 19:32:40.061091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00030000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.902 [2024-11-26 19:32:40.061106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.902 [2024-11-26 19:32:40.061162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.902 [2024-11-26 19:32:40.061175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.902 [2024-11-26 19:32:40.061232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.902 [2024-11-26 19:32:40.061246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:04.902 #68 NEW cov: 12520 ft: 15677 corp: 26/746b lim: 45 exec/s: 68 rss: 74Mb L: 36/43 MS: 1 PersAutoDict- DE: "\003\000\000\000"- 00:06:04.902 [2024-11-26 19:32:40.121110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.902 [2024-11-26 19:32:40.121139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.902 [2024-11-26 19:32:40.121196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.902 [2024-11-26 19:32:40.121210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.902 [2024-11-26 19:32:40.121283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.902 [2024-11-26 19:32:40.121298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.902 [2024-11-26 19:32:40.121354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.902 [2024-11-26 19:32:40.121368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:06:04.902 #69 NEW cov: 12520 ft: 15687 corp: 27/782b lim: 45 exec/s: 69 rss: 74Mb L: 36/43 MS: 1 ShuffleBytes- 00:06:04.902 [2024-11-26 19:32:40.161247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:32000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.902 [2024-11-26 19:32:40.161275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.902 [2024-11-26 19:32:40.161332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.902 [2024-11-26 19:32:40.161346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.903 [2024-11-26 19:32:40.161404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00060000 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.903 [2024-11-26 19:32:40.161418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.903 [2024-11-26 19:32:40.161472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.903 [2024-11-26 19:32:40.161486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:04.903 #70 NEW cov: 12520 ft: 15699 corp: 28/822b lim: 45 exec/s: 70 rss: 74Mb L: 40/43 MS: 1 CrossOver- 00:06:04.903 [2024-11-26 19:32:40.200882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.903 [2024-11-26 19:32:40.200908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.162 #71 NEW cov: 12520 ft: 15789 corp: 29/839b lim: 45 exec/s: 71 rss: 75Mb L: 17/43 MS: 1 CrossOver- 00:06:05.162 [2024-11-26 19:32:40.261034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.162 [2024-11-26 19:32:40.261060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.162 #72 NEW cov: 12520 ft: 15795 corp: 30/856b lim: 45 exec/s: 72 rss: 75Mb L: 17/43 MS: 1 CrossOver- 00:06:05.162 [2024-11-26 19:32:40.301474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:aeaeffef cdw11:aeae0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.162 [2024-11-26 19:32:40.301501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.162 [2024-11-26 19:32:40.301558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:aeaeaeae cdw11:aeae0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.162 [2024-11-26 19:32:40.301572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.162 [2024-11-26 19:32:40.301627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) 
qid:0 cid:6 nsid:0 cdw10:aeaeaeae cdw11:aeae0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.162 [2024-11-26 19:32:40.301662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.162 #77 NEW cov: 12520 ft: 15801 corp: 31/884b lim: 45 exec/s: 77 rss: 75Mb L: 28/43 MS: 5 CMP-InsertByte-ChangeBit-ChangeBit-InsertRepeatedBytes- DE: "\377\377\377\377"- 00:06:05.162 [2024-11-26 19:32:40.341247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:03000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.162 [2024-11-26 19:32:40.341273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.162 #78 NEW cov: 12520 ft: 15827 corp: 32/901b lim: 45 exec/s: 78 rss: 75Mb L: 17/43 MS: 1 PersAutoDict- DE: "\003\000\000\000"- 00:06:05.162 [2024-11-26 19:32:40.401801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00ff0a03 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.162 [2024-11-26 19:32:40.401828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.162 [2024-11-26 19:32:40.401884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.162 [2024-11-26 19:32:40.401898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.162 [2024-11-26 19:32:40.401952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.162 [2024-11-26 19:32:40.401966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.162 #79 NEW cov: 12520 ft: 15853 corp: 33/935b lim: 45 exec/s: 79 rss: 75Mb L: 34/43 MS: 1 ChangeByte- 00:06:05.162 [2024-11-26 19:32:40.462141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.162 [2024-11-26 19:32:40.462167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.162 [2024-11-26 19:32:40.462225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfc00fc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.162 [2024-11-26 19:32:40.462239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.162 [2024-11-26 19:32:40.462294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.162 [2024-11-26 19:32:40.462308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.162 [2024-11-26 19:32:40.462363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.162 [2024-11-26 19:32:40.462377] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:05.422 #80 NEW cov: 12520 ft: 15876 corp: 34/973b lim: 45 exec/s: 80 rss: 75Mb L: 38/43 MS: 1 InsertRepeatedBytes- 00:06:05.422 [2024-11-26 19:32:40.501891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.422 [2024-11-26 19:32:40.501917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.422 [2024-11-26 19:32:40.501975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfc00fc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.422 [2024-11-26 19:32:40.501990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.422 #81 NEW cov: 12520 ft: 15917 corp: 35/998b lim: 45 exec/s: 81 rss: 75Mb L: 25/43 MS: 1 EraseBytes- 00:06:05.422 [2024-11-26 19:32:40.562193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.422 [2024-11-26 19:32:40.562221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.422 [2024-11-26 19:32:40.562288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.422 [2024-11-26 19:32:40.562303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.422 [2024-11-26 19:32:40.562356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.422 [2024-11-26 19:32:40.562387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.422 #82 NEW cov: 12520 ft: 15921 corp: 36/1025b lim: 45 exec/s: 41 rss: 75Mb L: 27/43 MS: 1 ChangeByte- 00:06:05.422 #82 DONE cov: 12520 ft: 15921 corp: 36/1025b lim: 45 exec/s: 41 rss: 75Mb 00:06:05.422 ###### Recommended dictionary. ###### 00:06:05.422 "\003\000\000\000" # Uses: 3 00:06:05.422 "\001\000\000\004" # Uses: 0 00:06:05.422 "\377\377\377\377" # Uses: 0 00:06:05.422 ###### End of recommended dictionary. 
###### 00:06:05.422 Done 82 runs in 2 second(s) 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:05.422 19:32:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:05.682 [2024-11-26 19:32:40.736860] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:06:05.682 [2024-11-26 19:32:40.736928] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121094 ] 00:06:05.941 [2024-11-26 19:32:40.999788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.941 [2024-11-26 19:32:41.057722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.941 [2024-11-26 19:32:41.116436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.941 [2024-11-26 19:32:41.132805] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:05.941 INFO: Running with entropic power schedule (0xFF, 100). 
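The nvmf/run.sh trace just above (run.sh@23-45, timestamped 19:32:40) repeats a fixed per-instance setup: derive a dedicated TCP port from the fuzzer index, create the corpus directory, rewrite trsvcid in the JSON config, write the LeakSanitizer suppressions, and launch llvm_nvme_fuzz. A minimal bash sketch of that pattern follows; it is reconstructed from the traced commands only, so SPDK_ROOT, the output redirections, and the "44" port prefix are assumptions rather than the actual run.sh source.

```bash
#!/usr/bin/env bash
# Sketch only: reconstructed from the traced run.sh commands above, not the
# actual nvmf/run.sh source. SPDK_ROOT, the redirections, and the port
# derivation ("44" + zero-padded fuzzer index) are assumptions.
SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}

start_llvm_fuzz_sketch() {
    local fuzzer_type=$1 timen=$2 core=$3
    local corpus_dir="$SPDK_ROOT/../corpus/llvm_nvmf_${fuzzer_type}"
    local nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
    local suppress_file=/var/tmp/suppress_nvmf_fuzz
    local lsan_opts="report_objects=1:suppressions=${suppress_file}:print_suppressions=0"

    # One TCP listener per instance: 4406 for type 6, 4407 for type 7, ...
    local port
    port="44$(printf %02d "$fuzzer_type")"
    mkdir -p "$corpus_dir"

    local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"

    # Point this instance's JSON config at its own port (run.sh@38).
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
        "$SPDK_ROOT/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

    # Allocations intentionally left live at shutdown are suppressed for LSan
    # (run.sh@41-42); the redirection into the file is assumed.
    printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$suppress_file"

    LSAN_OPTIONS="$lsan_opts" \
        "$SPDK_ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m "$core" -s 512 -P "$SPDK_ROOT/../output/llvm/" \
        -F "$trid" -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"
}

# The trace above corresponds to:  start_llvm_fuzz_sketch 6 1 0x1
```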
00:06:05.941 INFO: Seed: 1540140889 00:06:05.941 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:05.941 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:05.941 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:05.941 INFO: A corpus is not provided, starting from an empty corpus 00:06:05.941 #2 INITED exec/s: 0 rss: 65Mb 00:06:05.941 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:05.941 This may also happen if the target rejected all inputs we tried so far 00:06:05.941 [2024-11-26 19:32:41.199843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000080a cdw11:00000000 00:06:05.941 [2024-11-26 19:32:41.199880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.510 NEW_FUNC[1/715]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:06.510 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:06.510 #4 NEW cov: 12210 ft: 12209 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 2 ChangeBit-CrossOver- 00:06:06.510 [2024-11-26 19:32:41.550256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a21 cdw11:00000000 00:06:06.510 [2024-11-26 19:32:41.550308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.510 #5 NEW cov: 12323 ft: 12905 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:06:06.510 [2024-11-26 19:32:41.590175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000080a cdw11:00000000 00:06:06.510 [2024-11-26 19:32:41.590203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.510 #6 NEW cov: 12329 ft: 13161 corp: 4/7b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:06:06.510 [2024-11-26 19:32:41.650240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 00:06:06.510 [2024-11-26 19:32:41.650267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.510 #7 NEW cov: 12414 ft: 13498 corp: 5/10b lim: 10 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 CrossOver- 00:06:06.510 [2024-11-26 19:32:41.690360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2b cdw11:00000000 00:06:06.510 [2024-11-26 19:32:41.690388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.510 #8 NEW cov: 12414 ft: 13552 corp: 6/12b lim: 10 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 InsertByte- 00:06:06.510 [2024-11-26 19:32:41.730474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007a0a cdw11:00000000 00:06:06.510 [2024-11-26 19:32:41.730502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.510 #10 NEW cov: 12414 ft: 13584 
corp: 7/14b lim: 10 exec/s: 0 rss: 73Mb L: 2/3 MS: 2 CopyPart-InsertByte- 00:06:06.510 [2024-11-26 19:32:41.770585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000080a cdw11:00000000 00:06:06.510 [2024-11-26 19:32:41.770618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.510 #11 NEW cov: 12414 ft: 13736 corp: 8/16b lim: 10 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 CopyPart- 00:06:06.769 [2024-11-26 19:32:41.820768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005a0a cdw11:00000000 00:06:06.769 [2024-11-26 19:32:41.820795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.769 #12 NEW cov: 12414 ft: 13816 corp: 9/18b lim: 10 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 ChangeBit- 00:06:06.769 [2024-11-26 19:32:41.891066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000808 cdw11:00000000 00:06:06.769 [2024-11-26 19:32:41.891095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.769 #13 NEW cov: 12414 ft: 13861 corp: 10/20b lim: 10 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 CopyPart- 00:06:06.769 [2024-11-26 19:32:41.941152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000080a cdw11:00000000 00:06:06.769 [2024-11-26 19:32:41.941179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.769 #14 NEW cov: 12414 ft: 13903 corp: 11/22b lim: 10 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 CrossOver- 00:06:06.769 [2024-11-26 19:32:42.001373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2b cdw11:00000000 00:06:06.769 [2024-11-26 19:32:42.001401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.769 #15 NEW cov: 12414 ft: 13984 corp: 12/24b lim: 10 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 CrossOver- 00:06:06.769 [2024-11-26 19:32:42.071560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a88 cdw11:00000000 00:06:06.769 [2024-11-26 19:32:42.071587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.028 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:07.028 #16 NEW cov: 12437 ft: 14064 corp: 13/27b lim: 10 exec/s: 0 rss: 74Mb L: 3/3 MS: 1 ChangeByte- 00:06:07.028 [2024-11-26 19:32:42.132081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000825 cdw11:00000000 00:06:07.028 [2024-11-26 19:32:42.132108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.028 [2024-11-26 19:32:42.132221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002525 cdw11:00000000 00:06:07.028 [2024-11-26 19:32:42.132238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.028 [2024-11-26 19:32:42.132341] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000250a cdw11:00000000 00:06:07.028 [2024-11-26 19:32:42.132357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.028 #17 NEW cov: 12437 ft: 14303 corp: 14/33b lim: 10 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:06:07.028 [2024-11-26 19:32:42.181792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a23 cdw11:00000000 00:06:07.028 [2024-11-26 19:32:42.181819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.028 #18 NEW cov: 12437 ft: 14347 corp: 15/35b lim: 10 exec/s: 18 rss: 74Mb L: 2/6 MS: 1 ChangeBit- 00:06:07.028 [2024-11-26 19:32:42.242552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a88 cdw11:00000000 00:06:07.028 [2024-11-26 19:32:42.242579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.028 [2024-11-26 19:32:42.242699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009d9d cdw11:00000000 00:06:07.028 [2024-11-26 19:32:42.242714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.028 [2024-11-26 19:32:42.242825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009d9d cdw11:00000000 00:06:07.028 [2024-11-26 19:32:42.242840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.028 [2024-11-26 19:32:42.242952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009d0a cdw11:00000000 00:06:07.028 [2024-11-26 19:32:42.242967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.028 #19 NEW cov: 12437 ft: 14582 corp: 16/43b lim: 10 exec/s: 19 rss: 74Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:06:07.028 [2024-11-26 19:32:42.302109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c00a cdw11:00000000 00:06:07.028 [2024-11-26 19:32:42.302137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.028 #20 NEW cov: 12437 ft: 14592 corp: 17/45b lim: 10 exec/s: 20 rss: 74Mb L: 2/8 MS: 1 InsertByte- 00:06:07.287 [2024-11-26 19:32:42.352660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000825 cdw11:00000000 00:06:07.287 [2024-11-26 19:32:42.352686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.287 [2024-11-26 19:32:42.352799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a25 cdw11:00000000 00:06:07.287 [2024-11-26 19:32:42.352816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.287 [2024-11-26 19:32:42.352923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 
cdw10:00002588 cdw11:00000000 00:06:07.287 [2024-11-26 19:32:42.352939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.287 #21 NEW cov: 12437 ft: 14608 corp: 18/51b lim: 10 exec/s: 21 rss: 74Mb L: 6/8 MS: 1 CrossOver- 00:06:07.288 [2024-11-26 19:32:42.412427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000080a cdw11:00000000 00:06:07.288 [2024-11-26 19:32:42.412453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.288 #22 NEW cov: 12437 ft: 14626 corp: 19/53b lim: 10 exec/s: 22 rss: 74Mb L: 2/8 MS: 1 CopyPart- 00:06:07.288 [2024-11-26 19:32:42.472591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a23 cdw11:00000000 00:06:07.288 [2024-11-26 19:32:42.472621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.288 #23 NEW cov: 12437 ft: 14679 corp: 20/56b lim: 10 exec/s: 23 rss: 74Mb L: 3/8 MS: 1 InsertByte- 00:06:07.288 [2024-11-26 19:32:42.532789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007ae7 cdw11:00000000 00:06:07.288 [2024-11-26 19:32:42.532815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.288 #24 NEW cov: 12437 ft: 14729 corp: 21/58b lim: 10 exec/s: 24 rss: 74Mb L: 2/8 MS: 1 ChangeByte- 00:06:07.288 [2024-11-26 19:32:42.573037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a88 cdw11:00000000 00:06:07.288 [2024-11-26 19:32:42.573063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.288 [2024-11-26 19:32:42.573172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000080a cdw11:00000000 00:06:07.288 [2024-11-26 19:32:42.573192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.288 #25 NEW cov: 12437 ft: 14872 corp: 22/62b lim: 10 exec/s: 25 rss: 74Mb L: 4/8 MS: 1 CrossOver- 00:06:07.547 [2024-11-26 19:32:42.612946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005a02 cdw11:00000000 00:06:07.547 [2024-11-26 19:32:42.612973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.547 #26 NEW cov: 12437 ft: 14874 corp: 23/64b lim: 10 exec/s: 26 rss: 74Mb L: 2/8 MS: 1 ChangeBinInt- 00:06:07.547 [2024-11-26 19:32:42.683700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002525 cdw11:00000000 00:06:07.547 [2024-11-26 19:32:42.683730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.547 [2024-11-26 19:32:42.683859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002525 cdw11:00000000 00:06:07.547 [2024-11-26 19:32:42.683878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.547 [2024-11-26 19:32:42.683997] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000080a cdw11:00000000 00:06:07.547 [2024-11-26 19:32:42.684014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.547 #27 NEW cov: 12437 ft: 14882 corp: 24/70b lim: 10 exec/s: 27 rss: 74Mb L: 6/8 MS: 1 ShuffleBytes- 00:06:07.547 [2024-11-26 19:32:42.733605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a88 cdw11:00000000 00:06:07.547 [2024-11-26 19:32:42.733635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.547 [2024-11-26 19:32:42.733745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000070a cdw11:00000000 00:06:07.547 [2024-11-26 19:32:42.733761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.547 #28 NEW cov: 12437 ft: 14962 corp: 25/74b lim: 10 exec/s: 28 rss: 74Mb L: 4/8 MS: 1 InsertByte- 00:06:07.547 [2024-11-26 19:32:42.783518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007a3b cdw11:00000000 00:06:07.547 [2024-11-26 19:32:42.783544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.547 #29 NEW cov: 12437 ft: 14969 corp: 26/77b lim: 10 exec/s: 29 rss: 74Mb L: 3/8 MS: 1 InsertByte- 00:06:07.547 [2024-11-26 19:32:42.833910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000825 cdw11:00000000 00:06:07.547 [2024-11-26 19:32:42.833939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.547 [2024-11-26 19:32:42.834053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002525 cdw11:00000000 00:06:07.547 [2024-11-26 19:32:42.834071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.807 #30 NEW cov: 12437 ft: 15070 corp: 27/81b lim: 10 exec/s: 30 rss: 74Mb L: 4/8 MS: 1 CrossOver- 00:06:07.807 [2024-11-26 19:32:42.883870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000080a cdw11:00000000 00:06:07.807 [2024-11-26 19:32:42.883899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.807 #31 NEW cov: 12437 ft: 15109 corp: 28/84b lim: 10 exec/s: 31 rss: 74Mb L: 3/8 MS: 1 CrossOver- 00:06:07.807 [2024-11-26 19:32:42.934045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a21 cdw11:00000000 00:06:07.807 [2024-11-26 19:32:42.934075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.807 #37 NEW cov: 12437 ft: 15139 corp: 29/86b lim: 10 exec/s: 37 rss: 74Mb L: 2/8 MS: 1 CopyPart- 00:06:07.807 [2024-11-26 19:32:42.984803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000cbcb cdw11:00000000 00:06:07.807 [2024-11-26 19:32:42.984830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.807 [2024-11-26 19:32:42.984942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000cbcb cdw11:00000000 00:06:07.807 [2024-11-26 19:32:42.984960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.807 [2024-11-26 19:32:42.985079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000cbcb cdw11:00000000 00:06:07.807 [2024-11-26 19:32:42.985095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.807 [2024-11-26 19:32:42.985205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000cbcb cdw11:00000000 00:06:07.807 [2024-11-26 19:32:42.985222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.807 #41 NEW cov: 12437 ft: 15170 corp: 30/95b lim: 10 exec/s: 41 rss: 74Mb L: 9/9 MS: 4 EraseBytes-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:06:07.807 [2024-11-26 19:32:43.034339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007e08 cdw11:00000000 00:06:07.807 [2024-11-26 19:32:43.034365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.807 #42 NEW cov: 12437 ft: 15189 corp: 31/98b lim: 10 exec/s: 42 rss: 74Mb L: 3/9 MS: 1 InsertByte- 00:06:07.807 [2024-11-26 19:32:43.104612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007a2c cdw11:00000000 00:06:07.807 [2024-11-26 19:32:43.104637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.066 #43 NEW cov: 12437 ft: 15194 corp: 32/100b lim: 10 exec/s: 43 rss: 74Mb L: 2/9 MS: 1 ChangeByte- 00:06:08.066 [2024-11-26 19:32:43.164841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007ae7 cdw11:00000000 00:06:08.066 [2024-11-26 19:32:43.164867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.067 #44 NEW cov: 12437 ft: 15196 corp: 33/102b lim: 10 exec/s: 22 rss: 74Mb L: 2/9 MS: 1 CopyPart- 00:06:08.067 #44 DONE cov: 12437 ft: 15196 corp: 33/102b lim: 10 exec/s: 22 rss: 74Mb 00:06:08.067 Done 44 runs in 2 second(s) 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local 
nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:08.067 19:32:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:08.067 [2024-11-26 19:32:43.337937] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:06:08.067 [2024-11-26 19:32:43.338009] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121629 ] 00:06:08.327 [2024-11-26 19:32:43.597135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.586 [2024-11-26 19:32:43.656120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.586 [2024-11-26 19:32:43.715102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.586 [2024-11-26 19:32:43.731437] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:08.586 INFO: Running with entropic power schedule (0xFF, 100). 00:06:08.586 INFO: Seed: 4138120525 00:06:08.586 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:08.586 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:08.586 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:08.586 INFO: A corpus is not provided, starting from an empty corpus 00:06:08.586 #2 INITED exec/s: 0 rss: 65Mb 00:06:08.586 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
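The ../common.sh@72-73 entries in the trace above show the outer loop advancing from instance 6 to instance 7: the (( i++ )) increment, the i < fuzz_num bound check, then start_llvm_fuzz 7 1 0x1. A minimal sketch of that driver loop follows; fuzz_num, timen, and core are placeholders (assumptions), and the per-instance cleanup of the JSON config and the LSan suppression file (run.sh@54) happens inside start_llvm_fuzz when a run finishes, not in this loop.

```bash
# Sketch of the driver loop implied by the ../common.sh@72-73 trace entries.
# Only the increment, the bound check, and the start_llvm_fuzz call are
# visible in the trace; the variable values below are placeholders.
fuzz_num=${fuzz_num:-11}   # placeholder: total number of fuzzer personalities
timen=1                    # matches the traced "start_llvm_fuzz 7 1 0x1" call
core=0x1

i=0
while (( i < fuzz_num )); do
    # Each iteration runs one fuzzer personality; run.sh removes
    # /tmp/fuzz_json_${i}.conf and /var/tmp/suppress_nvmf_fuzz itself
    # when the instance completes (run.sh@54 in the trace).
    start_llvm_fuzz "$i" "$timen" "$core"
    (( i++ ))
done
```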
00:06:08.586 This may also happen if the target rejected all inputs we tried so far 00:06:08.586 [2024-11-26 19:32:43.776816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:08.586 [2024-11-26 19:32:43.776845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.846 NEW_FUNC[1/715]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:08.846 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:08.846 #4 NEW cov: 12207 ft: 12207 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 2 CrossOver-CrossOver- 00:06:08.846 [2024-11-26 19:32:44.107628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ad4 cdw11:00000000 00:06:08.846 [2024-11-26 19:32:44.107666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.846 #5 NEW cov: 12323 ft: 12703 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeByte- 00:06:09.105 [2024-11-26 19:32:44.168047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:09.105 [2024-11-26 19:32:44.168075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.105 [2024-11-26 19:32:44.168132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.105 [2024-11-26 19:32:44.168146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.105 [2024-11-26 19:32:44.168197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.105 [2024-11-26 19:32:44.168212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.105 [2024-11-26 19:32:44.168263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.105 [2024-11-26 19:32:44.168276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.105 #6 NEW cov: 12329 ft: 13285 corp: 4/13b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:06:09.105 [2024-11-26 19:32:44.208132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:06:09.105 [2024-11-26 19:32:44.208159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.105 [2024-11-26 19:32:44.208211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.105 [2024-11-26 19:32:44.208225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.106 [2024-11-26 19:32:44.208278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 
cid:6 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.208291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.106 [2024-11-26 19:32:44.208344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.208357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.106 #7 NEW cov: 12414 ft: 13510 corp: 5/21b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeByte- 00:06:09.106 [2024-11-26 19:32:44.268418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.268445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.106 [2024-11-26 19:32:44.268497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.268510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.106 [2024-11-26 19:32:44.268561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.268575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.106 [2024-11-26 19:32:44.268625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.268639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.106 [2024-11-26 19:32:44.268691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:000000d4 cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.268704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:09.106 #8 NEW cov: 12414 ft: 13639 corp: 6/31b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:09.106 [2024-11-26 19:32:44.328236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fdfd cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.328265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.106 [2024-11-26 19:32:44.328319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000fdff cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.328333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.106 #12 NEW cov: 12414 ft: 13885 corp: 7/35b lim: 10 exec/s: 0 rss: 73Mb L: 4/10 MS: 4 EraseBytes-CopyPart-ChangeByte-InsertRepeatedBytes- 00:06:09.106 [2024-11-26 19:32:44.368735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.368762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:09.106 [2024-11-26 19:32:44.368816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.368830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.106 [2024-11-26 19:32:44.368882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.368895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.106 [2024-11-26 19:32:44.368946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.368961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.106 [2024-11-26 19:32:44.369012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.369027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:09.106 #13 NEW cov: 12414 ft: 13975 corp: 8/45b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:06:09.106 [2024-11-26 19:32:44.408338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:09.106 [2024-11-26 19:32:44.408364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.366 #16 NEW cov: 12414 ft: 14024 corp: 9/47b lim: 10 exec/s: 0 rss: 73Mb L: 2/10 MS: 3 EraseBytes-CopyPart-CopyPart- 00:06:09.366 [2024-11-26 19:32:44.448895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:09.366 [2024-11-26 19:32:44.448921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.366 [2024-11-26 19:32:44.448974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.366 [2024-11-26 19:32:44.448988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.366 [2024-11-26 19:32:44.449037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.366 [2024-11-26 19:32:44.449051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.366 [2024-11-26 19:32:44.449102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.366 [2024-11-26 19:32:44.449115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.366 [2024-11-26 19:32:44.449165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:000076a9 cdw11:00000000 00:06:09.366 [2024-11-26 19:32:44.449182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:09.366 #17 NEW cov: 12414 ft: 14111 corp: 10/57b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 ChangeByte- 00:06:09.366 [2024-11-26 19:32:44.508607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:06:09.366 [2024-11-26 19:32:44.508633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.366 #18 NEW cov: 12414 ft: 14185 corp: 11/60b lim: 10 exec/s: 0 rss: 73Mb L: 3/10 MS: 1 CrossOver- 00:06:09.366 [2024-11-26 19:32:44.569209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f9f5 cdw11:00000000 00:06:09.366 [2024-11-26 19:32:44.569235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.366 [2024-11-26 19:32:44.569288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.367 [2024-11-26 19:32:44.569302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.367 [2024-11-26 19:32:44.569351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.367 [2024-11-26 19:32:44.569365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.367 [2024-11-26 19:32:44.569417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.367 [2024-11-26 19:32:44.569431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.367 [2024-11-26 19:32:44.569482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:000076a9 cdw11:00000000 00:06:09.367 [2024-11-26 19:32:44.569496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:09.367 #19 NEW cov: 12414 ft: 14227 corp: 12/70b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:09.367 [2024-11-26 19:32:44.629426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:09.367 [2024-11-26 19:32:44.629453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.367 [2024-11-26 19:32:44.629504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:09.367 [2024-11-26 19:32:44.629518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.367 [2024-11-26 19:32:44.629571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:06:09.367 [2024-11-26 19:32:44.629584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.367 [2024-11-26 19:32:44.629639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000d400 cdw11:00000000 00:06:09.367 
[2024-11-26 19:32:44.629653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.367 [2024-11-26 19:32:44.629704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:000000d4 cdw11:00000000 00:06:09.367 [2024-11-26 19:32:44.629717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:09.367 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:09.367 #20 NEW cov: 12437 ft: 14269 corp: 13/80b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:06:09.627 [2024-11-26 19:32:44.689244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.689271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.689323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.689336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.627 #21 NEW cov: 12437 ft: 14306 corp: 14/85b lim: 10 exec/s: 0 rss: 74Mb L: 5/10 MS: 1 CrossOver- 00:06:09.627 [2024-11-26 19:32:44.749519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aaf cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.749545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.749596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.749615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.749667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.749680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.627 #22 NEW cov: 12437 ft: 14468 corp: 15/92b lim: 10 exec/s: 22 rss: 74Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:06:09.627 [2024-11-26 19:32:44.789636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aaf cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.789662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.789715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.789729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.789781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.789795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.627 #23 NEW cov: 12437 ft: 14526 corp: 16/98b lim: 10 exec/s: 23 rss: 74Mb L: 6/10 MS: 1 EraseBytes- 00:06:09.627 [2024-11-26 19:32:44.850035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000080a cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.850061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.850114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.850128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.850181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.850195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.850245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.850258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.850311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:000076a9 cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.850327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:09.627 #24 NEW cov: 12437 ft: 14541 corp: 17/108b lim: 10 exec/s: 24 rss: 74Mb L: 10/10 MS: 1 ChangeBit- 00:06:09.627 [2024-11-26 19:32:44.889897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aaf cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.889923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.889977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.889991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.890041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004aaf cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.890055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.627 #25 NEW cov: 12437 ft: 14569 corp: 18/115b lim: 10 exec/s: 25 rss: 74Mb L: 7/10 MS: 1 ChangeByte- 00:06:09.627 [2024-11-26 19:32:44.930233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000008a9 cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.930258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.930311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a90a 
cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.930325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.930377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.930391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.930443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.930456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.627 [2024-11-26 19:32:44.930507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:000076a9 cdw11:00000000 00:06:09.627 [2024-11-26 19:32:44.930522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:09.887 #26 NEW cov: 12437 ft: 14582 corp: 19/125b lim: 10 exec/s: 26 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:09.887 [2024-11-26 19:32:44.989992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000560a cdw11:00000000 00:06:09.887 [2024-11-26 19:32:44.990019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.887 #27 NEW cov: 12437 ft: 14593 corp: 20/127b lim: 10 exec/s: 27 rss: 74Mb L: 2/10 MS: 1 ChangeByte- 00:06:09.887 [2024-11-26 19:32:45.050233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:06:09.887 [2024-11-26 19:32:45.050259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.887 [2024-11-26 19:32:45.050312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ef0a cdw11:00000000 00:06:09.887 [2024-11-26 19:32:45.050325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.887 #28 NEW cov: 12437 ft: 14606 corp: 21/132b lim: 10 exec/s: 28 rss: 74Mb L: 5/10 MS: 1 ChangeByte- 00:06:09.887 [2024-11-26 19:32:45.110491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fdfd cdw11:00000000 00:06:09.887 [2024-11-26 19:32:45.110517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.887 [2024-11-26 19:32:45.110569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000fdfd cdw11:00000000 00:06:09.887 [2024-11-26 19:32:45.110583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.887 [2024-11-26 19:32:45.110640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000fdff cdw11:00000000 00:06:09.887 [2024-11-26 19:32:45.110654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.887 #29 NEW cov: 12437 ft: 14643 
corp: 22/138b lim: 10 exec/s: 29 rss: 74Mb L: 6/10 MS: 1 CopyPart- 00:06:09.887 [2024-11-26 19:32:45.170917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:06:09.887 [2024-11-26 19:32:45.170943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.887 [2024-11-26 19:32:45.170996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:09.887 [2024-11-26 19:32:45.171010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.887 [2024-11-26 19:32:45.171063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:09.887 [2024-11-26 19:32:45.171077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.887 [2024-11-26 19:32:45.171128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 00:06:09.887 [2024-11-26 19:32:45.171141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.887 [2024-11-26 19:32:45.171194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d5d4 cdw11:00000000 00:06:09.887 [2024-11-26 19:32:45.171209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:09.887 #30 NEW cov: 12437 ft: 14659 corp: 23/148b lim: 10 exec/s: 30 rss: 74Mb L: 10/10 MS: 1 CMP- DE: "\001\000\000\000\000\000\003\325"- 00:06:10.147 [2024-11-26 19:32:45.210674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.210701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.147 [2024-11-26 19:32:45.210755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000efd4 cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.210769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.147 #31 NEW cov: 12437 ft: 14662 corp: 24/153b lim: 10 exec/s: 31 rss: 75Mb L: 5/10 MS: 1 CrossOver- 00:06:10.147 [2024-11-26 19:32:45.270868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000af4a cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.270895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.147 [2024-11-26 19:32:45.270948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.270962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.147 #32 NEW cov: 12437 ft: 14680 corp: 25/157b lim: 10 exec/s: 32 rss: 75Mb L: 4/10 MS: 1 EraseBytes- 00:06:10.147 [2024-11-26 19:32:45.330890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) 
qid:0 cid:4 nsid:0 cdw10:0000e30a cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.330916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.147 #33 NEW cov: 12437 ft: 14742 corp: 26/159b lim: 10 exec/s: 33 rss: 75Mb L: 2/10 MS: 1 ChangeByte- 00:06:10.147 [2024-11-26 19:32:45.371263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.371288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.147 [2024-11-26 19:32:45.371340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.371353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.147 [2024-11-26 19:32:45.371403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.371417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.147 #34 NEW cov: 12437 ft: 14825 corp: 27/166b lim: 10 exec/s: 34 rss: 75Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:06:10.147 [2024-11-26 19:32:45.411468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.411494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.147 [2024-11-26 19:32:45.411547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.411560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.147 [2024-11-26 19:32:45.411613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.411627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.147 [2024-11-26 19:32:45.411678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000003d5 cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.411691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.147 #37 NEW cov: 12437 ft: 14841 corp: 28/175b lim: 10 exec/s: 37 rss: 75Mb L: 9/10 MS: 3 ShuffleBytes-ChangeByte-PersAutoDict- DE: "\001\000\000\000\000\000\003\325"- 00:06:10.147 [2024-11-26 19:32:45.451472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.451497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.147 [2024-11-26 19:32:45.451550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.451564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.147 [2024-11-26 19:32:45.451617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:10.147 [2024-11-26 19:32:45.451631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.406 #38 NEW cov: 12437 ft: 14850 corp: 29/182b lim: 10 exec/s: 38 rss: 75Mb L: 7/10 MS: 1 ShuffleBytes- 00:06:10.406 [2024-11-26 19:32:45.511677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aaf cdw11:00000000 00:06:10.406 [2024-11-26 19:32:45.511702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.406 [2024-11-26 19:32:45.511761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:10.406 [2024-11-26 19:32:45.511775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.406 [2024-11-26 19:32:45.511828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000caaf cdw11:00000000 00:06:10.406 [2024-11-26 19:32:45.511842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.406 #39 NEW cov: 12437 ft: 14854 corp: 30/189b lim: 10 exec/s: 39 rss: 75Mb L: 7/10 MS: 1 ChangeBit- 00:06:10.406 [2024-11-26 19:32:45.551506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d40a cdw11:00000000 00:06:10.406 [2024-11-26 19:32:45.551531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.407 #40 NEW cov: 12437 ft: 14864 corp: 31/191b lim: 10 exec/s: 40 rss: 75Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:10.407 [2024-11-26 19:32:45.591607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e3fd cdw11:00000000 00:06:10.407 [2024-11-26 19:32:45.591632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.407 #41 NEW cov: 12437 ft: 14869 corp: 32/193b lim: 10 exec/s: 41 rss: 75Mb L: 2/10 MS: 1 CrossOver- 00:06:10.407 [2024-11-26 19:32:45.652080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aaf cdw11:00000000 00:06:10.407 [2024-11-26 19:32:45.652107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.407 [2024-11-26 19:32:45.652158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000af4a cdw11:00000000 00:06:10.407 [2024-11-26 19:32:45.652173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.407 [2024-11-26 19:32:45.652222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:10.407 [2024-11-26 19:32:45.652235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.407 #42 NEW cov: 12437 ft: 14885 
corp: 33/200b lim: 10 exec/s: 42 rss: 75Mb L: 7/10 MS: 1 ShuffleBytes- 00:06:10.407 [2024-11-26 19:32:45.692318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fdfd cdw11:00000000 00:06:10.407 [2024-11-26 19:32:45.692346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.407 [2024-11-26 19:32:45.692397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000fdfd cdw11:00000000 00:06:10.407 [2024-11-26 19:32:45.692411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.407 [2024-11-26 19:32:45.692463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000fdfd cdw11:00000000 00:06:10.407 [2024-11-26 19:32:45.692478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.407 [2024-11-26 19:32:45.692528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000fdff cdw11:00000000 00:06:10.407 [2024-11-26 19:32:45.692542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.666 #43 NEW cov: 12437 ft: 14895 corp: 34/208b lim: 10 exec/s: 43 rss: 75Mb L: 8/10 MS: 1 CopyPart- 00:06:10.666 [2024-11-26 19:32:45.752226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000001f cdw11:00000000 00:06:10.666 [2024-11-26 19:32:45.752256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.666 [2024-11-26 19:32:45.752306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000ad4 cdw11:00000000 00:06:10.666 [2024-11-26 19:32:45.752320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.666 #44 NEW cov: 12437 ft: 14899 corp: 35/212b lim: 10 exec/s: 22 rss: 75Mb L: 4/10 MS: 1 CMP- DE: "\000\037"- 00:06:10.666 #44 DONE cov: 12437 ft: 14899 corp: 35/212b lim: 10 exec/s: 22 rss: 75Mb 00:06:10.666 ###### Recommended dictionary. ###### 00:06:10.666 "\001\000\000\000\000\000\003\325" # Uses: 1 00:06:10.666 "\000\037" # Uses: 0 00:06:10.666 ###### End of recommended dictionary. 
###### 00:06:10.666 Done 44 runs in 2 second(s) 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:10.666 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:10.667 19:32:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:10.667 [2024-11-26 19:32:45.922700] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:06:10.667 [2024-11-26 19:32:45.922770] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1122038 ] 00:06:10.926 [2024-11-26 19:32:46.184137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.185 [2024-11-26 19:32:46.237783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.185 [2024-11-26 19:32:46.296660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.185 [2024-11-26 19:32:46.313012] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:11.185 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:11.185 INFO: Seed: 2425163338 00:06:11.185 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:11.185 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:11.185 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:11.185 INFO: A corpus is not provided, starting from an empty corpus 00:06:11.185 [2024-11-26 19:32:46.358380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.185 [2024-11-26 19:32:46.358409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.185 #2 INITED cov: 12237 ft: 12223 corp: 1/1b exec/s: 0 rss: 71Mb 00:06:11.185 [2024-11-26 19:32:46.398515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.185 [2024-11-26 19:32:46.398543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.185 [2024-11-26 19:32:46.398605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.185 [2024-11-26 19:32:46.398619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.185 #3 NEW cov: 12350 ft: 13477 corp: 2/3b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CopyPart- 00:06:11.186 [2024-11-26 19:32:46.458741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.186 [2024-11-26 19:32:46.458767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.186 [2024-11-26 19:32:46.458824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.186 [2024-11-26 19:32:46.458838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.186 #4 NEW cov: 12356 ft: 13627 corp: 3/5b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 InsertByte- 00:06:11.445 [2024-11-26 19:32:46.498615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.445 [2024-11-26 19:32:46.498642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.445 #5 NEW cov: 12441 ft: 13881 corp: 4/6b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 ChangeByte- 00:06:11.445 [2024-11-26 19:32:46.538944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.445 [2024-11-26 19:32:46.538969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.445 [2024-11-26 19:32:46.539028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.445 [2024-11-26 19:32:46.539043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.445 #6 NEW cov: 12441 ft: 13997 corp: 5/8b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 InsertByte- 00:06:11.445 [2024-11-26 19:32:46.579032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.445 [2024-11-26 19:32:46.579057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.445 [2024-11-26 19:32:46.579115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.445 [2024-11-26 19:32:46.579132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.445 #7 NEW cov: 12441 ft: 14180 corp: 6/10b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CopyPart- 00:06:11.445 [2024-11-26 19:32:46.639170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.445 [2024-11-26 19:32:46.639196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.445 [2024-11-26 19:32:46.639253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.445 [2024-11-26 19:32:46.639268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.445 #8 NEW cov: 12441 ft: 14251 corp: 7/12b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeBit- 00:06:11.445 [2024-11-26 19:32:46.699394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.445 [2024-11-26 19:32:46.699420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.445 [2024-11-26 19:32:46.699476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.445 [2024-11-26 19:32:46.699491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.445 #9 NEW cov: 12441 ft: 14317 corp: 8/14b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeBit- 00:06:11.445 [2024-11-26 19:32:46.739324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.445 [2024-11-26 19:32:46.739350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.705 #10 NEW cov: 12441 ft: 14343 corp: 9/15b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 EraseBytes- 00:06:11.705 [2024-11-26 19:32:46.779453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.705 [2024-11-26 19:32:46.779478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.705 #11 NEW cov: 12441 ft: 14415 corp: 10/16b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 ChangeBit- 00:06:11.705 [2024-11-26 19:32:46.820025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.705 [2024-11-26 19:32:46.820050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.705 [2024-11-26 19:32:46.820108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.705 [2024-11-26 19:32:46.820122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.705 [2024-11-26 19:32:46.820179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.705 [2024-11-26 19:32:46.820194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.705 [2024-11-26 19:32:46.820250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.705 [2024-11-26 19:32:46.820264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.705 #12 NEW cov: 12441 ft: 14767 corp: 11/20b lim: 5 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 CrossOver- 00:06:11.705 [2024-11-26 19:32:46.879745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.705 [2024-11-26 19:32:46.879771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.705 #13 NEW cov: 12441 ft: 14803 corp: 12/21b lim: 5 exec/s: 0 rss: 72Mb L: 1/4 MS: 1 EraseBytes- 00:06:11.706 [2024-11-26 19:32:46.920494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.706 [2024-11-26 19:32:46.920520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.706 [2024-11-26 19:32:46.920579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.706 [2024-11-26 19:32:46.920593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.706 [2024-11-26 19:32:46.920653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.706 [2024-11-26 19:32:46.920667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.706 [2024-11-26 19:32:46.920722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.706 [2024-11-26 19:32:46.920736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.706 [2024-11-26 19:32:46.920791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.706 [2024-11-26 19:32:46.920806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:11.706 #14 NEW cov: 12441 ft: 14861 corp: 13/26b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:06:11.706 [2024-11-26 19:32:46.980045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.706 [2024-11-26 19:32:46.980072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.965 #15 NEW cov: 12441 ft: 14936 corp: 14/27b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:06:11.965 [2024-11-26 19:32:47.040384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.965 [2024-11-26 19:32:47.040410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.965 [2024-11-26 19:32:47.040466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.965 [2024-11-26 19:32:47.040480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.965 #16 NEW cov: 12441 ft: 14955 corp: 15/29b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ChangeBit- 00:06:11.965 [2024-11-26 19:32:47.100389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.965 [2024-11-26 19:32:47.100416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.965 #17 NEW cov: 12441 ft: 15013 corp: 16/30b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:06:11.965 [2024-11-26 19:32:47.160728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.965 [2024-11-26 19:32:47.160755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.965 [2024-11-26 19:32:47.160813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.965 [2024-11-26 19:32:47.160827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.965 #18 NEW cov: 12441 ft: 15028 corp: 
17/32b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ChangeBit- 00:06:11.965 [2024-11-26 19:32:47.201012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.965 [2024-11-26 19:32:47.201038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.965 [2024-11-26 19:32:47.201098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.965 [2024-11-26 19:32:47.201113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.965 [2024-11-26 19:32:47.201168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.965 [2024-11-26 19:32:47.201182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.965 #19 NEW cov: 12441 ft: 15217 corp: 18/35b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:06:11.965 [2024-11-26 19:32:47.240786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.965 [2024-11-26 19:32:47.240813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.483 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:12.483 #20 NEW cov: 12464 ft: 15267 corp: 19/36b lim: 5 exec/s: 20 rss: 74Mb L: 1/5 MS: 1 ChangeBit- 00:06:12.483 [2024-11-26 19:32:47.551698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.483 [2024-11-26 19:32:47.551730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.483 #21 NEW cov: 12464 ft: 15296 corp: 20/37b lim: 5 exec/s: 21 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:06:12.483 [2024-11-26 19:32:47.591928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.483 [2024-11-26 19:32:47.591955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.483 [2024-11-26 19:32:47.592017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.483 [2024-11-26 19:32:47.592032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.483 #22 NEW cov: 12464 ft: 15341 corp: 21/39b lim: 5 exec/s: 22 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:06:12.483 [2024-11-26 19:32:47.631888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.483 [2024-11-26 19:32:47.631915] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.483 #23 NEW cov: 12464 ft: 15432 corp: 22/40b lim: 5 exec/s: 23 rss: 74Mb L: 1/5 MS: 1 EraseBytes- 00:06:12.483 [2024-11-26 19:32:47.672454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.484 [2024-11-26 19:32:47.672482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.484 [2024-11-26 19:32:47.672547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.484 [2024-11-26 19:32:47.672562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.484 [2024-11-26 19:32:47.672624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.484 [2024-11-26 19:32:47.672639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.484 [2024-11-26 19:32:47.672699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.484 [2024-11-26 19:32:47.672713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.484 #24 NEW cov: 12464 ft: 15452 corp: 23/44b lim: 5 exec/s: 24 rss: 74Mb L: 4/5 MS: 1 ChangeBinInt- 00:06:12.484 [2024-11-26 19:32:47.712273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.484 [2024-11-26 19:32:47.712299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.484 [2024-11-26 19:32:47.712362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.484 [2024-11-26 19:32:47.712375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.484 #25 NEW cov: 12464 ft: 15522 corp: 24/46b lim: 5 exec/s: 25 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:06:12.484 [2024-11-26 19:32:47.772451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.484 [2024-11-26 19:32:47.772477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.484 [2024-11-26 19:32:47.772539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.484 [2024-11-26 19:32:47.772554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.744 #26 NEW cov: 12464 ft: 15546 corp: 25/48b lim: 5 exec/s: 26 rss: 74Mb L: 2/5 MS: 1 
ChangeBit- 00:06:12.744 [2024-11-26 19:32:47.832629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.744 [2024-11-26 19:32:47.832656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.744 [2024-11-26 19:32:47.832719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.744 [2024-11-26 19:32:47.832733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.744 #27 NEW cov: 12464 ft: 15635 corp: 26/50b lim: 5 exec/s: 27 rss: 74Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:12.744 [2024-11-26 19:32:47.872543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.744 [2024-11-26 19:32:47.872568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.744 #28 NEW cov: 12464 ft: 15703 corp: 27/51b lim: 5 exec/s: 28 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:12.744 [2024-11-26 19:32:47.913174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.744 [2024-11-26 19:32:47.913201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.744 [2024-11-26 19:32:47.913266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.744 [2024-11-26 19:32:47.913280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.744 [2024-11-26 19:32:47.913340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.744 [2024-11-26 19:32:47.913355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.744 [2024-11-26 19:32:47.913413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.744 [2024-11-26 19:32:47.913427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.744 #29 NEW cov: 12464 ft: 15765 corp: 28/55b lim: 5 exec/s: 29 rss: 74Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:12.744 [2024-11-26 19:32:47.973213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.744 [2024-11-26 19:32:47.973239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.744 [2024-11-26 19:32:47.973302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.744 [2024-11-26 19:32:47.973316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.744 [2024-11-26 19:32:47.973378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.744 [2024-11-26 19:32:47.973391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.744 #30 NEW cov: 12464 ft: 15789 corp: 29/58b lim: 5 exec/s: 30 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:06:12.744 [2024-11-26 19:32:48.013471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.744 [2024-11-26 19:32:48.013497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.744 [2024-11-26 19:32:48.013562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.744 [2024-11-26 19:32:48.013576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.744 [2024-11-26 19:32:48.013638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.744 [2024-11-26 19:32:48.013652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.744 [2024-11-26 19:32:48.013718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.744 [2024-11-26 19:32:48.013732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.004 #31 NEW cov: 12464 ft: 15871 corp: 30/62b lim: 5 exec/s: 31 rss: 74Mb L: 4/5 MS: 1 CopyPart- 00:06:13.004 [2024-11-26 19:32:48.073823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.073849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.004 [2024-11-26 19:32:48.073910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.073924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.004 [2024-11-26 19:32:48.073988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.074002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.004 [2024-11-26 19:32:48.074063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.074077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.004 [2024-11-26 19:32:48.074136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.074151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:13.004 #32 NEW cov: 12464 ft: 15875 corp: 31/67b lim: 5 exec/s: 32 rss: 74Mb L: 5/5 MS: 1 InsertByte- 00:06:13.004 [2024-11-26 19:32:48.133312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.133339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.004 #33 NEW cov: 12464 ft: 15886 corp: 32/68b lim: 5 exec/s: 33 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:13.004 [2024-11-26 19:32:48.173883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.173909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.004 [2024-11-26 19:32:48.173969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.173983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.004 [2024-11-26 19:32:48.174043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.174056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.004 [2024-11-26 19:32:48.174116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.174134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.004 #34 NEW cov: 12464 ft: 15901 corp: 33/72b lim: 5 exec/s: 34 rss: 75Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:13.004 [2024-11-26 19:32:48.234095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.234123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.004 [2024-11-26 19:32:48.234186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.234200] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.004 [2024-11-26 19:32:48.234264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.234278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.004 [2024-11-26 19:32:48.234338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.234353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.004 #35 NEW cov: 12464 ft: 15923 corp: 34/76b lim: 5 exec/s: 35 rss: 75Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:13.004 [2024-11-26 19:32:48.274188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.274214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.004 [2024-11-26 19:32:48.274277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.274292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.004 [2024-11-26 19:32:48.274351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.274365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.004 [2024-11-26 19:32:48.274425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.004 [2024-11-26 19:32:48.274439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.264 #36 NEW cov: 12464 ft: 15925 corp: 35/80b lim: 5 exec/s: 36 rss: 76Mb L: 4/5 MS: 1 InsertByte- 00:06:13.264 [2024-11-26 19:32:48.333832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.264 [2024-11-26 19:32:48.333857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.264 #37 NEW cov: 12464 ft: 15957 corp: 36/81b lim: 5 exec/s: 18 rss: 76Mb L: 1/5 MS: 1 CrossOver- 00:06:13.264 #37 DONE cov: 12464 ft: 15957 corp: 36/81b lim: 5 exec/s: 18 rss: 76Mb 00:06:13.264 Done 37 runs in 2 second(s) 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # 
start_llvm_fuzz 9 1 0x1 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:13.264 19:32:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:06:13.264 [2024-11-26 19:32:48.507032] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:06:13.264 [2024-11-26 19:32:48.507103] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1122453 ] 00:06:13.524 [2024-11-26 19:32:48.765045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.524 [2024-11-26 19:32:48.819381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.783 [2024-11-26 19:32:48.878250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.783 [2024-11-26 19:32:48.894589] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:13.783 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:13.783 INFO: Seed: 710188167 00:06:13.783 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:13.783 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:13.783 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:13.783 INFO: A corpus is not provided, starting from an empty corpus 00:06:13.783 [2024-11-26 19:32:48.964884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.783 [2024-11-26 19:32:48.964921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.783 #2 INITED cov: 12237 ft: 12236 corp: 1/1b exec/s: 0 rss: 71Mb 00:06:13.783 [2024-11-26 19:32:49.015148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.783 [2024-11-26 19:32:49.015177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.783 #3 NEW cov: 12350 ft: 12660 corp: 2/2b lim: 5 exec/s: 0 rss: 71Mb L: 1/1 MS: 1 ShuffleBytes- 00:06:13.783 [2024-11-26 19:32:49.085560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.783 [2024-11-26 19:32:49.085588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.042 #4 NEW cov: 12356 ft: 13038 corp: 3/3b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeByte- 00:06:14.042 [2024-11-26 19:32:49.156483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.042 [2024-11-26 19:32:49.156512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.042 [2024-11-26 19:32:49.156581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.042 [2024-11-26 19:32:49.156603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.042 [2024-11-26 19:32:49.156679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.042 [2024-11-26 19:32:49.156695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.042 #5 NEW cov: 12441 ft: 13956 corp: 4/6b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 CMP- DE: "\001\010"- 00:06:14.042 [2024-11-26 19:32:49.216193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.042 [2024-11-26 19:32:49.216223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.042 #6 NEW cov: 12441 ft: 14010 corp: 5/7b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 ChangeBit- 
00:06:14.042 [2024-11-26 19:32:49.267826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.042 [2024-11-26 19:32:49.267853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.042 [2024-11-26 19:32:49.267928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.042 [2024-11-26 19:32:49.267942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.042 [2024-11-26 19:32:49.268027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.042 [2024-11-26 19:32:49.268043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.042 [2024-11-26 19:32:49.268114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.042 [2024-11-26 19:32:49.268128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.042 [2024-11-26 19:32:49.268201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.042 [2024-11-26 19:32:49.268215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:14.042 #7 NEW cov: 12441 ft: 14556 corp: 6/12b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:14.042 [2024-11-26 19:32:49.337261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.042 [2024-11-26 19:32:49.337292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.042 [2024-11-26 19:32:49.337372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.042 [2024-11-26 19:32:49.337387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.301 #8 NEW cov: 12441 ft: 14790 corp: 7/14b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:14.301 [2024-11-26 19:32:49.407224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.301 [2024-11-26 19:32:49.407251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.301 #9 NEW cov: 12441 ft: 14867 corp: 8/15b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:06:14.301 [2024-11-26 19:32:49.479146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:14.301 [2024-11-26 19:32:49.479173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.301 [2024-11-26 19:32:49.479259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.301 [2024-11-26 19:32:49.479275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.301 [2024-11-26 19:32:49.479346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.301 [2024-11-26 19:32:49.479362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.301 [2024-11-26 19:32:49.479436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.301 [2024-11-26 19:32:49.479451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.301 [2024-11-26 19:32:49.479529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.301 [2024-11-26 19:32:49.479544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:14.301 #10 NEW cov: 12441 ft: 14894 corp: 9/20b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBit- 00:06:14.301 [2024-11-26 19:32:49.537875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.301 [2024-11-26 19:32:49.537903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.301 #11 NEW cov: 12441 ft: 14939 corp: 10/21b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:06:14.301 [2024-11-26 19:32:49.589533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.301 [2024-11-26 19:32:49.589561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.301 [2024-11-26 19:32:49.589636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.301 [2024-11-26 19:32:49.589651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.301 [2024-11-26 19:32:49.589733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.301 [2024-11-26 19:32:49.589749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.301 [2024-11-26 19:32:49.589815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 
nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.301 [2024-11-26 19:32:49.589830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.301 [2024-11-26 19:32:49.589900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.301 [2024-11-26 19:32:49.589916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:14.561 #12 NEW cov: 12441 ft: 14970 corp: 11/26b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CMP- DE: "\007\000\000\000"- 00:06:14.561 [2024-11-26 19:32:49.639557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.561 [2024-11-26 19:32:49.639583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.561 [2024-11-26 19:32:49.639654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.561 [2024-11-26 19:32:49.639670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.561 [2024-11-26 19:32:49.639742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.561 [2024-11-26 19:32:49.639755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.561 [2024-11-26 19:32:49.639828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.561 [2024-11-26 19:32:49.639842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.561 #13 NEW cov: 12441 ft: 14998 corp: 12/30b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 InsertByte- 00:06:14.561 [2024-11-26 19:32:49.709352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.561 [2024-11-26 19:32:49.709378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.561 [2024-11-26 19:32:49.709447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.561 [2024-11-26 19:32:49.709461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.561 [2024-11-26 19:32:49.709541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.561 [2024-11-26 19:32:49.709555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.561 #14 NEW cov: 12441 ft: 15029 corp: 13/33b lim: 5 
exec/s: 0 rss: 72Mb L: 3/5 MS: 1 PersAutoDict- DE: "\001\010"- 00:06:14.561 [2024-11-26 19:32:49.759521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.561 [2024-11-26 19:32:49.759550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.561 [2024-11-26 19:32:49.759624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.561 [2024-11-26 19:32:49.759641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.561 #15 NEW cov: 12441 ft: 15056 corp: 14/35b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:06:14.561 [2024-11-26 19:32:49.809954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.561 [2024-11-26 19:32:49.809980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.561 [2024-11-26 19:32:49.810053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.561 [2024-11-26 19:32:49.810068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.822 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:14.822 #16 NEW cov: 12464 ft: 15098 corp: 15/37b lim: 5 exec/s: 16 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:06:14.822 [2024-11-26 19:32:50.120321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.822 [2024-11-26 19:32:50.120361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.822 [2024-11-26 19:32:50.120488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.822 [2024-11-26 19:32:50.120506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.822 [2024-11-26 19:32:50.120637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.822 [2024-11-26 19:32:50.120656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.081 #17 NEW cov: 12464 ft: 15268 corp: 16/40b lim: 5 exec/s: 17 rss: 73Mb L: 3/5 MS: 1 CMP- DE: "\001\010"- 00:06:15.081 [2024-11-26 19:32:50.169757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.081 [2024-11-26 19:32:50.169786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:15.081 #18 NEW cov: 12464 ft: 15480 corp: 17/41b lim: 5 exec/s: 18 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:06:15.081 [2024-11-26 19:32:50.219844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.081 [2024-11-26 19:32:50.219873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.081 #19 NEW cov: 12464 ft: 15568 corp: 18/42b lim: 5 exec/s: 19 rss: 73Mb L: 1/5 MS: 1 ChangeBinInt- 00:06:15.081 [2024-11-26 19:32:50.290370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.081 [2024-11-26 19:32:50.290399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.081 [2024-11-26 19:32:50.290511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.081 [2024-11-26 19:32:50.290531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.081 #20 NEW cov: 12464 ft: 15608 corp: 19/44b lim: 5 exec/s: 20 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:06:15.081 [2024-11-26 19:32:50.360864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.081 [2024-11-26 19:32:50.360892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.081 [2024-11-26 19:32:50.361008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.081 [2024-11-26 19:32:50.361025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.081 [2024-11-26 19:32:50.361154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.081 [2024-11-26 19:32:50.361171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.341 #21 NEW cov: 12464 ft: 15632 corp: 20/47b lim: 5 exec/s: 21 rss: 74Mb L: 3/5 MS: 1 CopyPart- 00:06:15.341 [2024-11-26 19:32:50.431134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.341 [2024-11-26 19:32:50.431162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.341 [2024-11-26 19:32:50.431298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.341 [2024-11-26 19:32:50.431318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.341 [2024-11-26 19:32:50.431441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.341 [2024-11-26 19:32:50.431460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.341 #22 NEW cov: 12464 ft: 15648 corp: 21/50b lim: 5 exec/s: 22 rss: 74Mb L: 3/5 MS: 1 CopyPart- 00:06:15.341 [2024-11-26 19:32:50.501934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.341 [2024-11-26 19:32:50.501963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.341 [2024-11-26 19:32:50.502081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.341 [2024-11-26 19:32:50.502098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.341 [2024-11-26 19:32:50.502226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.341 [2024-11-26 19:32:50.502243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.341 [2024-11-26 19:32:50.502355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.341 [2024-11-26 19:32:50.502372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.341 [2024-11-26 19:32:50.502492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.341 [2024-11-26 19:32:50.502511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:15.341 #23 NEW cov: 12464 ft: 15657 corp: 22/55b lim: 5 exec/s: 23 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:15.341 [2024-11-26 19:32:50.551218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.341 [2024-11-26 19:32:50.551247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.341 [2024-11-26 19:32:50.551367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.341 [2024-11-26 19:32:50.551383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.341 #24 NEW cov: 12464 ft: 15732 corp: 23/57b lim: 5 exec/s: 24 rss: 74Mb L: 2/5 MS: 1 ChangeByte- 00:06:15.341 [2024-11-26 19:32:50.621133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.341 [2024-11-26 19:32:50.621163] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.341 #25 NEW cov: 12464 ft: 15749 corp: 24/58b lim: 5 exec/s: 25 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:06:15.601 [2024-11-26 19:32:50.671639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.601 [2024-11-26 19:32:50.671667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.601 [2024-11-26 19:32:50.671818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.601 [2024-11-26 19:32:50.671836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.601 #26 NEW cov: 12464 ft: 15784 corp: 25/60b lim: 5 exec/s: 26 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:06:15.601 [2024-11-26 19:32:50.741451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.601 [2024-11-26 19:32:50.741480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.601 #27 NEW cov: 12464 ft: 15794 corp: 26/61b lim: 5 exec/s: 27 rss: 74Mb L: 1/5 MS: 1 CrossOver- 00:06:15.601 [2024-11-26 19:32:50.811713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.601 [2024-11-26 19:32:50.811743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.601 #28 NEW cov: 12464 ft: 15823 corp: 27/62b lim: 5 exec/s: 28 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:15.601 [2024-11-26 19:32:50.862161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.601 [2024-11-26 19:32:50.862190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.601 [2024-11-26 19:32:50.862314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.601 [2024-11-26 19:32:50.862331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.601 #29 NEW cov: 12464 ft: 15827 corp: 28/64b lim: 5 exec/s: 29 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:06:15.861 [2024-11-26 19:32:50.912104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.861 [2024-11-26 19:32:50.912132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.861 #30 NEW cov: 12464 ft: 15855 corp: 29/65b lim: 5 exec/s: 15 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:15.861 #30 DONE cov: 12464 ft: 15855 corp: 29/65b lim: 5 exec/s: 15 rss: 74Mb 00:06:15.861 ###### Recommended dictionary. 
###### 00:06:15.861 "\001\010" # Uses: 1 00:06:15.861 "\007\000\000\000" # Uses: 0 00:06:15.861 ###### End of recommended dictionary. ###### 00:06:15.861 Done 30 runs in 2 second(s) 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:15.861 19:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:15.861 [2024-11-26 19:32:51.097089] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:06:15.861 [2024-11-26 19:32:51.097155] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1122983 ] 00:06:16.121 [2024-11-26 19:32:51.354482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.121 [2024-11-26 19:32:51.417414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.381 [2024-11-26 19:32:51.476608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.381 [2024-11-26 19:32:51.492957] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:16.381 INFO: Running with entropic power schedule (0xFF, 100). 00:06:16.381 INFO: Seed: 3311179749 00:06:16.381 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:16.381 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:16.381 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:16.381 INFO: A corpus is not provided, starting from an empty corpus 00:06:16.381 #2 INITED exec/s: 0 rss: 65Mb 00:06:16.381 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:16.381 This may also happen if the target rejected all inputs we tried so far 00:06:16.381 [2024-11-26 19:32:51.538345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.381 [2024-11-26 19:32:51.538373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.640 NEW_FUNC[1/716]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:16.640 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:16.640 #5 NEW cov: 12260 ft: 12243 corp: 2/12b lim: 40 exec/s: 0 rss: 72Mb L: 11/11 MS: 3 ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:06:16.640 [2024-11-26 19:32:51.869160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.640 [2024-11-26 19:32:51.869194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.640 #6 NEW cov: 12373 ft: 12834 corp: 3/23b lim: 40 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 ShuffleBytes- 00:06:16.640 [2024-11-26 19:32:51.929256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.640 [2024-11-26 19:32:51.929283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.900 #9 NEW cov: 12379 ft: 13199 corp: 4/38b lim: 40 exec/s: 0 rss: 73Mb L: 15/15 MS: 3 CopyPart-InsertByte-InsertRepeatedBytes- 00:06:16.900 [2024-11-26 19:32:51.969585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.900 [2024-11-26 
19:32:51.969616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.900 [2024-11-26 19:32:51.969681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.900 [2024-11-26 19:32:51.969696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.900 [2024-11-26 19:32:51.969755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.900 [2024-11-26 19:32:51.969769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.900 #10 NEW cov: 12464 ft: 13768 corp: 5/68b lim: 40 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:16.900 [2024-11-26 19:32:52.009574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.900 [2024-11-26 19:32:52.009606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.900 [2024-11-26 19:32:52.009669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.900 [2024-11-26 19:32:52.009683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.900 #11 NEW cov: 12464 ft: 14011 corp: 6/87b lim: 40 exec/s: 0 rss: 73Mb L: 19/30 MS: 1 CrossOver- 00:06:16.900 [2024-11-26 19:32:52.069779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1a000080 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.900 [2024-11-26 19:32:52.069810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.900 [2024-11-26 19:32:52.069874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.900 [2024-11-26 19:32:52.069889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.900 #12 NEW cov: 12464 ft: 14065 corp: 7/106b lim: 40 exec/s: 0 rss: 73Mb L: 19/30 MS: 1 ChangeBit- 00:06:16.900 [2024-11-26 19:32:52.130078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.900 [2024-11-26 19:32:52.130105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.900 [2024-11-26 19:32:52.130170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e0000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.900 [2024-11-26 19:32:52.130184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.900 [2024-11-26 19:32:52.130248] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.900 [2024-11-26 19:32:52.130263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.900 #13 NEW cov: 12464 ft: 14118 corp: 8/136b lim: 40 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:16.900 [2024-11-26 19:32:52.190214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.900 [2024-11-26 19:32:52.190240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.900 [2024-11-26 19:32:52.190303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.900 [2024-11-26 19:32:52.190317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.900 [2024-11-26 19:32:52.190377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e0e00000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.900 [2024-11-26 19:32:52.190391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.159 #14 NEW cov: 12464 ft: 14149 corp: 9/166b lim: 40 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:17.159 [2024-11-26 19:32:52.230326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.159 [2024-11-26 19:32:52.230352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.159 [2024-11-26 19:32:52.230416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e0000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.159 [2024-11-26 19:32:52.230431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.159 [2024-11-26 19:32:52.230493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00e05ee0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.159 [2024-11-26 19:32:52.230507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.159 #15 NEW cov: 12464 ft: 14175 corp: 10/196b lim: 40 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 ChangeByte- 00:06:17.159 [2024-11-26 19:32:52.290243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.159 [2024-11-26 19:32:52.290270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.159 #16 NEW cov: 12464 ft: 14263 corp: 11/211b lim: 40 exec/s: 0 rss: 73Mb L: 15/30 MS: 1 ShuffleBytes- 00:06:17.159 [2024-11-26 19:32:52.330314] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:40405040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.160 [2024-11-26 19:32:52.330339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.160 #17 NEW cov: 12464 ft: 14347 corp: 12/222b lim: 40 exec/s: 0 rss: 73Mb L: 11/30 MS: 1 ChangeBit- 00:06:17.160 [2024-11-26 19:32:52.390912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:40000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.160 [2024-11-26 19:32:52.390937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.160 [2024-11-26 19:32:52.391001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.160 [2024-11-26 19:32:52.391015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.160 [2024-11-26 19:32:52.391074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.160 [2024-11-26 19:32:52.391089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.160 [2024-11-26 19:32:52.391165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.160 [2024-11-26 19:32:52.391179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:17.160 #18 NEW cov: 12464 ft: 14825 corp: 13/256b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:17.160 [2024-11-26 19:32:52.430720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.160 [2024-11-26 19:32:52.430746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.160 [2024-11-26 19:32:52.430810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.160 [2024-11-26 19:32:52.430824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.160 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:17.160 #19 NEW cov: 12487 ft: 14877 corp: 14/276b lim: 40 exec/s: 0 rss: 73Mb L: 20/34 MS: 1 InsertByte- 00:06:17.419 [2024-11-26 19:32:52.470995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.419 [2024-11-26 19:32:52.471021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.419 [2024-11-26 19:32:52.471081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 
cid:5 nsid:0 cdw10:00000000 cdw11:40404000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.419 [2024-11-26 19:32:52.471095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.419 [2024-11-26 19:32:52.471161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000040 cdw11:003d000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.419 [2024-11-26 19:32:52.471175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.419 #20 NEW cov: 12487 ft: 14963 corp: 15/300b lim: 40 exec/s: 0 rss: 74Mb L: 24/34 MS: 1 CrossOver- 00:06:17.419 [2024-11-26 19:32:52.530890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.419 [2024-11-26 19:32:52.530916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.419 #21 NEW cov: 12487 ft: 14980 corp: 16/311b lim: 40 exec/s: 21 rss: 74Mb L: 11/34 MS: 1 CrossOver- 00:06:17.419 [2024-11-26 19:32:52.571243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1a151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.419 [2024-11-26 19:32:52.571269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.419 [2024-11-26 19:32:52.571333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.419 [2024-11-26 19:32:52.571347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.420 [2024-11-26 19:32:52.571409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.420 [2024-11-26 19:32:52.571423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.420 #22 NEW cov: 12487 ft: 15004 corp: 17/338b lim: 40 exec/s: 22 rss: 74Mb L: 27/34 MS: 1 InsertRepeatedBytes- 00:06:17.420 [2024-11-26 19:32:52.611244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:50404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.420 [2024-11-26 19:32:52.611270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.420 [2024-11-26 19:32:52.611332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:40404050 cdw11:404040a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.420 [2024-11-26 19:32:52.611346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.420 #23 NEW cov: 12487 ft: 15064 corp: 18/354b lim: 40 exec/s: 23 rss: 74Mb L: 16/34 MS: 1 CopyPart- 00:06:17.420 [2024-11-26 19:32:52.671285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.420 
[2024-11-26 19:32:52.671311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.420 #24 NEW cov: 12487 ft: 15077 corp: 19/365b lim: 40 exec/s: 24 rss: 74Mb L: 11/34 MS: 1 CopyPart- 00:06:17.420 [2024-11-26 19:32:52.711673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e0e0e0e0 cdw11:e078e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.420 [2024-11-26 19:32:52.711699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.420 [2024-11-26 19:32:52.711761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e0e00000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.420 [2024-11-26 19:32:52.711775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.420 [2024-11-26 19:32:52.711847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0000e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.420 [2024-11-26 19:32:52.711862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.679 #25 NEW cov: 12487 ft: 15084 corp: 20/396b lim: 40 exec/s: 25 rss: 74Mb L: 31/34 MS: 1 InsertByte- 00:06:17.679 [2024-11-26 19:32:52.751520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:50404040 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.679 [2024-11-26 19:32:52.751546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.679 #26 NEW cov: 12487 ft: 15121 corp: 21/407b lim: 40 exec/s: 26 rss: 74Mb L: 11/34 MS: 1 ChangeBit- 00:06:17.679 [2024-11-26 19:32:52.791910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e0e0e0e0 cdw11:e078e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.679 [2024-11-26 19:32:52.791935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.679 [2024-11-26 19:32:52.791999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e0e00000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.679 [2024-11-26 19:32:52.792013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.679 [2024-11-26 19:32:52.792074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0000e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.679 [2024-11-26 19:32:52.792088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.679 #27 NEW cov: 12487 ft: 15144 corp: 22/438b lim: 40 exec/s: 27 rss: 74Mb L: 31/34 MS: 1 ChangeBit- 00:06:17.679 [2024-11-26 19:32:52.851771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.679 [2024-11-26 19:32:52.851797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.679 #28 NEW cov: 12487 ft: 15158 corp: 23/449b lim: 40 exec/s: 28 rss: 74Mb L: 11/34 MS: 1 CrossOver- 00:06:17.679 [2024-11-26 19:32:52.891898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.679 [2024-11-26 19:32:52.891924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.679 #29 NEW cov: 12487 ft: 15247 corp: 24/460b lim: 40 exec/s: 29 rss: 74Mb L: 11/34 MS: 1 CopyPart- 00:06:17.679 [2024-11-26 19:32:52.932030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:40404000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.679 [2024-11-26 19:32:52.932056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.679 #31 NEW cov: 12487 ft: 15258 corp: 25/471b lim: 40 exec/s: 31 rss: 74Mb L: 11/34 MS: 2 EraseBytes-CMP- DE: "\000\000\002\000"- 00:06:17.679 [2024-11-26 19:32:52.972168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.679 [2024-11-26 19:32:52.972210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.938 #32 NEW cov: 12487 ft: 15289 corp: 26/483b lim: 40 exec/s: 32 rss: 74Mb L: 12/34 MS: 1 CopyPart- 00:06:17.938 [2024-11-26 19:32:53.032439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404050 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.938 [2024-11-26 19:32:53.032465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.938 [2024-11-26 19:32:53.032531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:40405040 cdw11:4040a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.938 [2024-11-26 19:32:53.032545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.938 #33 NEW cov: 12487 ft: 15296 corp: 27/499b lim: 40 exec/s: 33 rss: 74Mb L: 16/34 MS: 1 CopyPart- 00:06:17.938 [2024-11-26 19:32:53.092467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.938 [2024-11-26 19:32:53.092493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.938 #34 NEW cov: 12487 ft: 15299 corp: 28/510b lim: 40 exec/s: 34 rss: 74Mb L: 11/34 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:17.938 [2024-11-26 19:32:53.152920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1a151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.938 [2024-11-26 19:32:53.152945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.938 [2024-11-26 19:32:53.153009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 
nsid:0 cdw10:15000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.938 [2024-11-26 19:32:53.153023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.938 [2024-11-26 19:32:53.153088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.938 [2024-11-26 19:32:53.153102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.938 #35 NEW cov: 12487 ft: 15311 corp: 29/538b lim: 40 exec/s: 35 rss: 74Mb L: 28/34 MS: 1 InsertByte- 00:06:17.939 [2024-11-26 19:32:53.212818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.939 [2024-11-26 19:32:53.212844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.939 #36 NEW cov: 12487 ft: 15315 corp: 30/549b lim: 40 exec/s: 36 rss: 74Mb L: 11/34 MS: 1 ChangeByte- 00:06:18.198 [2024-11-26 19:32:53.253341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:40000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.198 [2024-11-26 19:32:53.253367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.198 [2024-11-26 19:32:53.253433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.198 [2024-11-26 19:32:53.253448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.198 [2024-11-26 19:32:53.253510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.198 [2024-11-26 19:32:53.253524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.198 [2024-11-26 19:32:53.253587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000031 cdw11:00404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.198 [2024-11-26 19:32:53.253606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.198 #37 NEW cov: 12487 ft: 15340 corp: 31/584b lim: 40 exec/s: 37 rss: 74Mb L: 35/35 MS: 1 InsertByte- 00:06:18.198 [2024-11-26 19:32:53.313073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:40404000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.198 [2024-11-26 19:32:53.313100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.198 #38 NEW cov: 12487 ft: 15392 corp: 32/595b lim: 40 exec/s: 38 rss: 74Mb L: 11/35 MS: 1 PersAutoDict- DE: "\000\000\002\000"- 00:06:18.198 [2024-11-26 19:32:53.353274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:40404040 cdw11:40402d40 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:06:18.198 [2024-11-26 19:32:53.353303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.198 #39 NEW cov: 12487 ft: 15466 corp: 33/606b lim: 40 exec/s: 39 rss: 74Mb L: 11/35 MS: 1 ChangeByte- 00:06:18.198 [2024-11-26 19:32:53.413800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e0e0e0e0 cdw11:e078e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.198 [2024-11-26 19:32:53.413826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.198 [2024-11-26 19:32:53.413893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e0e00000 cdw11:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.198 [2024-11-26 19:32:53.413906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.198 [2024-11-26 19:32:53.413970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:40400000 cdw11:00e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.198 [2024-11-26 19:32:53.413984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.198 [2024-11-26 19:32:53.414047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.198 [2024-11-26 19:32:53.414061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.198 #40 NEW cov: 12487 ft: 15475 corp: 34/640b lim: 40 exec/s: 40 rss: 75Mb L: 34/35 MS: 1 CrossOver- 00:06:18.198 [2024-11-26 19:32:53.453914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e0e0e0e0 cdw11:e078e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.198 [2024-11-26 19:32:53.453940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.198 [2024-11-26 19:32:53.454006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e0e00040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.198 [2024-11-26 19:32:53.454021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.198 [2024-11-26 19:32:53.454082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.198 [2024-11-26 19:32:53.454096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.199 [2024-11-26 19:32:53.454157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:e0e0e0e0 cdw11:e0e8e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.199 [2024-11-26 19:32:53.454171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.199 #41 NEW cov: 12487 ft: 15513 corp: 35/673b lim: 40 exec/s: 41 rss: 76Mb L: 33/35 MS: 1 CrossOver- 00:06:18.458 [2024-11-26 19:32:53.514104] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.458 [2024-11-26 19:32:53.514134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.459 [2024-11-26 19:32:53.514201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.459 [2024-11-26 19:32:53.514215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.459 [2024-11-26 19:32:53.514278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.459 [2024-11-26 19:32:53.514292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.459 [2024-11-26 19:32:53.514356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:e09a9a9a cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.459 [2024-11-26 19:32:53.514369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.459 #42 NEW cov: 12487 ft: 15518 corp: 36/706b lim: 40 exec/s: 21 rss: 76Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:06:18.459 #42 DONE cov: 12487 ft: 15518 corp: 36/706b lim: 40 exec/s: 21 rss: 76Mb 00:06:18.459 ###### Recommended dictionary. ###### 00:06:18.459 "\000\000\000\000\000\000\000\000" # Uses: 2 00:06:18.459 "\000\000\002\000" # Uses: 1 00:06:18.459 ###### End of recommended dictionary. 
###### 00:06:18.459 Done 42 runs in 2 second(s) 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:18.459 19:32:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:18.459 [2024-11-26 19:32:53.685528] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:06:18.459 [2024-11-26 19:32:53.685610] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123439 ] 00:06:18.719 [2024-11-26 19:32:53.941666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.719 [2024-11-26 19:32:54.000444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.978 [2024-11-26 19:32:54.059552] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.978 [2024-11-26 19:32:54.075901] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:18.978 INFO: Running with entropic power schedule (0xFF, 100). 00:06:18.978 INFO: Seed: 1598220938 00:06:18.978 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:18.978 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:18.978 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:18.978 INFO: A corpus is not provided, starting from an empty corpus 00:06:18.978 #2 INITED exec/s: 0 rss: 65Mb 00:06:18.978 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:18.978 This may also happen if the target rejected all inputs we tried so far 00:06:18.978 [2024-11-26 19:32:54.121580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.978 [2024-11-26 19:32:54.121613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.978 [2024-11-26 19:32:54.121674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.978 [2024-11-26 19:32:54.121689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.978 [2024-11-26 19:32:54.121746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.978 [2024-11-26 19:32:54.121760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.237 NEW_FUNC[1/717]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:19.237 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:19.237 #16 NEW cov: 12272 ft: 12271 corp: 2/28b lim: 40 exec/s: 0 rss: 72Mb L: 27/27 MS: 4 CopyPart-ChangeByte-CopyPart-InsertRepeatedBytes- 00:06:19.237 [2024-11-26 19:32:54.452273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.237 [2024-11-26 19:32:54.452313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.237 [2024-11-26 19:32:54.452380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 
cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.237 [2024-11-26 19:32:54.452398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.237 #19 NEW cov: 12385 ft: 13124 corp: 3/47b lim: 40 exec/s: 0 rss: 73Mb L: 19/27 MS: 3 ShuffleBytes-ChangeByte-CrossOver- 00:06:19.237 [2024-11-26 19:32:54.492431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.237 [2024-11-26 19:32:54.492458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.237 [2024-11-26 19:32:54.492516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.237 [2024-11-26 19:32:54.492533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.237 [2024-11-26 19:32:54.492592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.237 [2024-11-26 19:32:54.492620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.237 #20 NEW cov: 12391 ft: 13328 corp: 4/75b lim: 40 exec/s: 0 rss: 73Mb L: 28/28 MS: 1 InsertByte- 00:06:19.497 [2024-11-26 19:32:54.552594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.497 [2024-11-26 19:32:54.552624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.497 [2024-11-26 19:32:54.552685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff01ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.497 [2024-11-26 19:32:54.552699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.497 [2024-11-26 19:32:54.552758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.497 [2024-11-26 19:32:54.552771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.497 #21 NEW cov: 12476 ft: 13629 corp: 5/103b lim: 40 exec/s: 0 rss: 73Mb L: 28/28 MS: 1 ChangeBinInt- 00:06:19.497 [2024-11-26 19:32:54.612772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.497 [2024-11-26 19:32:54.612797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.497 [2024-11-26 19:32:54.612854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.497 [2024-11-26 19:32:54.612868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:06:19.497 [2024-11-26 19:32:54.612922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.497 [2024-11-26 19:32:54.612936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.497 #22 NEW cov: 12476 ft: 13727 corp: 6/130b lim: 40 exec/s: 0 rss: 73Mb L: 27/28 MS: 1 ShuffleBytes- 00:06:19.497 [2024-11-26 19:32:54.653062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.497 [2024-11-26 19:32:54.653089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.497 [2024-11-26 19:32:54.653147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.497 [2024-11-26 19:32:54.653161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.497 [2024-11-26 19:32:54.653218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.497 [2024-11-26 19:32:54.653232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.497 [2024-11-26 19:32:54.653289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.497 [2024-11-26 19:32:54.653306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.497 #27 NEW cov: 12476 ft: 14157 corp: 7/167b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 5 ChangeByte-CopyPart-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:06:19.497 [2024-11-26 19:32:54.693147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.497 [2024-11-26 19:32:54.693173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.498 [2024-11-26 19:32:54.693231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.498 [2024-11-26 19:32:54.693245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.498 [2024-11-26 19:32:54.693305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffff72 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.498 [2024-11-26 19:32:54.693319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.498 [2024-11-26 19:32:54.693371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:7272ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.498 [2024-11-26 19:32:54.693385] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.498 #28 NEW cov: 12476 ft: 14291 corp: 8/201b lim: 40 exec/s: 0 rss: 73Mb L: 34/37 MS: 1 InsertRepeatedBytes- 00:06:19.498 [2024-11-26 19:32:54.753142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.498 [2024-11-26 19:32:54.753168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.498 [2024-11-26 19:32:54.753225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.498 [2024-11-26 19:32:54.753239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.498 [2024-11-26 19:32:54.753296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.498 [2024-11-26 19:32:54.753310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.498 #29 NEW cov: 12476 ft: 14374 corp: 9/230b lim: 40 exec/s: 0 rss: 73Mb L: 29/37 MS: 1 CopyPart- 00:06:19.758 [2024-11-26 19:32:54.813169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.813196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.758 [2024-11-26 19:32:54.813256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff35 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.813271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.758 #35 NEW cov: 12476 ft: 14456 corp: 10/249b lim: 40 exec/s: 0 rss: 73Mb L: 19/37 MS: 1 ChangeByte- 00:06:19.758 [2024-11-26 19:32:54.853610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.853636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.758 [2024-11-26 19:32:54.853699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.853714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.758 [2024-11-26 19:32:54.853769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.853783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.758 [2024-11-26 19:32:54.853838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.853852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.758 #36 NEW cov: 12476 ft: 14492 corp: 11/284b lim: 40 exec/s: 0 rss: 73Mb L: 35/37 MS: 1 InsertRepeatedBytes- 00:06:19.758 [2024-11-26 19:32:54.893575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.893607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.758 [2024-11-26 19:32:54.893668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffff40ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.893682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.758 [2024-11-26 19:32:54.893738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.893751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.758 #37 NEW cov: 12476 ft: 14529 corp: 12/311b lim: 40 exec/s: 0 rss: 73Mb L: 27/37 MS: 1 ChangeByte- 00:06:19.758 [2024-11-26 19:32:54.933673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffff01ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.933698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.758 [2024-11-26 19:32:54.933756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.933770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.758 [2024-11-26 19:32:54.933825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.933839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.758 #38 NEW cov: 12476 ft: 14570 corp: 13/339b lim: 40 exec/s: 0 rss: 73Mb L: 28/37 MS: 1 CopyPart- 00:06:19.758 [2024-11-26 19:32:54.973819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.973845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.758 [2024-11-26 19:32:54.973902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.973916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.758 [2024-11-26 19:32:54.973973] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:54.973987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.758 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:19.758 #44 NEW cov: 12499 ft: 14616 corp: 14/368b lim: 40 exec/s: 0 rss: 73Mb L: 29/37 MS: 1 ShuffleBytes- 00:06:19.758 [2024-11-26 19:32:55.034108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.758 [2024-11-26 19:32:55.034134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.758 [2024-11-26 19:32:55.034195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.759 [2024-11-26 19:32:55.034209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.759 [2024-11-26 19:32:55.034262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d6d4d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.759 [2024-11-26 19:32:55.034275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.759 [2024-11-26 19:32:55.034332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.759 [2024-11-26 19:32:55.034346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.019 #45 NEW cov: 12499 ft: 14647 corp: 15/405b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 ChangeBit- 00:06:20.019 [2024-11-26 19:32:55.094088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.094115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.019 [2024-11-26 19:32:55.094174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.094188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.019 [2024-11-26 19:32:55.094246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.094259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.019 #46 NEW cov: 12499 ft: 14677 corp: 16/435b lim: 40 exec/s: 46 rss: 73Mb L: 30/37 MS: 1 CrossOver- 00:06:20.019 [2024-11-26 19:32:55.134217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff 
cdw11:ffffff1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.134243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.019 [2024-11-26 19:32:55.134300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff01ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.134314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.019 [2024-11-26 19:32:55.134372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.134389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.019 #47 NEW cov: 12499 ft: 14740 corp: 17/463b lim: 40 exec/s: 47 rss: 74Mb L: 28/37 MS: 1 ChangeBinInt- 00:06:20.019 [2024-11-26 19:32:55.194536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.194563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.019 [2024-11-26 19:32:55.194621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.194636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.019 [2024-11-26 19:32:55.194694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffb0f9 cdw11:e355e4d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.194708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.019 [2024-11-26 19:32:55.194761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:9100ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.194774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.019 #48 NEW cov: 12499 ft: 14754 corp: 18/497b lim: 40 exec/s: 48 rss: 74Mb L: 34/37 MS: 1 CMP- DE: "\260\371\343U\344\325\221\000"- 00:06:20.019 [2024-11-26 19:32:55.254539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.254566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.019 [2024-11-26 19:32:55.254622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:1dffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.254636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.019 [2024-11-26 19:32:55.254693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.254707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.019 #49 NEW cov: 12499 ft: 14758 corp: 19/526b lim: 40 exec/s: 49 rss: 74Mb L: 29/37 MS: 1 ChangeBinInt- 00:06:20.019 [2024-11-26 19:32:55.294806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.294832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.019 [2024-11-26 19:32:55.294893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.294907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.019 [2024-11-26 19:32:55.294960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.294974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.019 [2024-11-26 19:32:55.295031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffff03 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.019 [2024-11-26 19:32:55.295048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.279 #50 NEW cov: 12499 ft: 14787 corp: 20/561b lim: 40 exec/s: 50 rss: 74Mb L: 35/37 MS: 1 ChangeBinInt- 00:06:20.279 [2024-11-26 19:32:55.354822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.279 [2024-11-26 19:32:55.354849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.279 [2024-11-26 19:32:55.354909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.280 [2024-11-26 19:32:55.354923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.280 [2024-11-26 19:32:55.354977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.280 [2024-11-26 19:32:55.354991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.280 #51 NEW cov: 12499 ft: 14799 corp: 21/589b lim: 40 exec/s: 51 rss: 74Mb L: 28/37 MS: 1 ChangeByte- 00:06:20.280 [2024-11-26 19:32:55.394727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.280 [2024-11-26 19:32:55.394752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:06:20.280 [2024-11-26 19:32:55.394813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffe355e4 cdw11:d59100ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.280 [2024-11-26 19:32:55.394827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.280 #52 NEW cov: 12499 ft: 14830 corp: 22/612b lim: 40 exec/s: 52 rss: 74Mb L: 23/37 MS: 1 EraseBytes- 00:06:20.280 [2024-11-26 19:32:55.455152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.280 [2024-11-26 19:32:55.455178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.280 [2024-11-26 19:32:55.455237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff01ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.280 [2024-11-26 19:32:55.455252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.280 [2024-11-26 19:32:55.455303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.280 [2024-11-26 19:32:55.455317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.280 #53 NEW cov: 12499 ft: 14845 corp: 23/640b lim: 40 exec/s: 53 rss: 74Mb L: 28/37 MS: 1 ShuffleBytes- 00:06:20.280 [2024-11-26 19:32:55.495267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.280 [2024-11-26 19:32:55.495293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.280 [2024-11-26 19:32:55.495349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.280 [2024-11-26 19:32:55.495363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.280 [2024-11-26 19:32:55.495418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff35ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.280 [2024-11-26 19:32:55.495436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.280 #54 NEW cov: 12499 ft: 14854 corp: 24/666b lim: 40 exec/s: 54 rss: 74Mb L: 26/37 MS: 1 CopyPart- 00:06:20.280 [2024-11-26 19:32:55.555439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffff1c00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.280 [2024-11-26 19:32:55.555464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.280 [2024-11-26 19:32:55.555519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.280 [2024-11-26 
19:32:55.555533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.280 [2024-11-26 19:32:55.555588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.280 [2024-11-26 19:32:55.555607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.539 #60 NEW cov: 12499 ft: 14860 corp: 25/694b lim: 40 exec/s: 60 rss: 74Mb L: 28/37 MS: 1 ChangeBinInt- 00:06:20.539 [2024-11-26 19:32:55.615769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:96d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.539 [2024-11-26 19:32:55.615794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.539 [2024-11-26 19:32:55.615854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.540 [2024-11-26 19:32:55.615869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.540 [2024-11-26 19:32:55.615926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.540 [2024-11-26 19:32:55.615940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.540 [2024-11-26 19:32:55.615995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.540 [2024-11-26 19:32:55.616008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.540 #61 NEW cov: 12499 ft: 14876 corp: 26/731b lim: 40 exec/s: 61 rss: 74Mb L: 37/37 MS: 1 ChangeBit- 00:06:20.540 [2024-11-26 19:32:55.655575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:fffeffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.540 [2024-11-26 19:32:55.655607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.540 [2024-11-26 19:32:55.655667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffe355e4 cdw11:d59100ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.540 [2024-11-26 19:32:55.655681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.540 #62 NEW cov: 12499 ft: 14885 corp: 27/754b lim: 40 exec/s: 62 rss: 74Mb L: 23/37 MS: 1 ChangeBit- 00:06:20.540 [2024-11-26 19:32:55.715971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.540 [2024-11-26 19:32:55.715996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.540 [2024-11-26 19:32:55.716058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.540 [2024-11-26 19:32:55.716073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.540 [2024-11-26 19:32:55.716132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.540 [2024-11-26 19:32:55.716146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.540 #63 NEW cov: 12499 ft: 14898 corp: 28/783b lim: 40 exec/s: 63 rss: 74Mb L: 29/37 MS: 1 ChangeBit- 00:06:20.540 [2024-11-26 19:32:55.756051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.540 [2024-11-26 19:32:55.756077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.540 [2024-11-26 19:32:55.756137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.540 [2024-11-26 19:32:55.756151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.540 [2024-11-26 19:32:55.756208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffbaff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.540 [2024-11-26 19:32:55.756222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.540 #64 NEW cov: 12499 ft: 14941 corp: 29/814b lim: 40 exec/s: 64 rss: 74Mb L: 31/37 MS: 1 InsertByte- 00:06:20.540 [2024-11-26 19:32:55.816202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.540 [2024-11-26 19:32:55.816227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.540 [2024-11-26 19:32:55.816284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff01ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.540 [2024-11-26 19:32:55.816298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.540 [2024-11-26 19:32:55.816352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.540 [2024-11-26 19:32:55.816365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.540 #65 NEW cov: 12499 ft: 14947 corp: 30/842b lim: 40 exec/s: 65 rss: 74Mb L: 28/37 MS: 1 ShuffleBytes- 00:06:20.800 [2024-11-26 19:32:55.856149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:55.856175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:06:20.800 [2024-11-26 19:32:55.856233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:55.856247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.800 #66 NEW cov: 12499 ft: 14962 corp: 31/861b lim: 40 exec/s: 66 rss: 74Mb L: 19/37 MS: 1 CrossOver- 00:06:20.800 [2024-11-26 19:32:55.896435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:55.896464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.800 [2024-11-26 19:32:55.896525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff3affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:55.896539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.800 [2024-11-26 19:32:55.896594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff35 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:55.896612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.800 #67 NEW cov: 12499 ft: 14963 corp: 32/888b lim: 40 exec/s: 67 rss: 74Mb L: 27/37 MS: 1 InsertByte- 00:06:20.800 [2024-11-26 19:32:55.956615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:55.956641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.800 [2024-11-26 19:32:55.956697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:55.956712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.800 [2024-11-26 19:32:55.956766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:55.956780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.800 #68 NEW cov: 12499 ft: 14967 corp: 33/918b lim: 40 exec/s: 68 rss: 74Mb L: 30/37 MS: 1 ChangeBit- 00:06:20.800 [2024-11-26 19:32:55.996722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:55.996747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.800 [2024-11-26 19:32:55.996807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 
19:32:55.996821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.800 [2024-11-26 19:32:55.996876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:55.996890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.800 #69 NEW cov: 12499 ft: 14977 corp: 34/945b lim: 40 exec/s: 69 rss: 74Mb L: 27/37 MS: 1 CopyPart- 00:06:20.800 [2024-11-26 19:32:56.036852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:56.036878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.800 [2024-11-26 19:32:56.036935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:56.036949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.800 [2024-11-26 19:32:56.037005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:56.037022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.800 #70 NEW cov: 12499 ft: 14991 corp: 35/973b lim: 40 exec/s: 70 rss: 74Mb L: 28/37 MS: 1 InsertByte- 00:06:20.800 [2024-11-26 19:32:56.076919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2121ffff cdw11:28ffff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:56.076944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.800 [2024-11-26 19:32:56.077006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:56.077020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.800 [2024-11-26 19:32:56.077094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.800 [2024-11-26 19:32:56.077109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.800 #71 NEW cov: 12499 ft: 15002 corp: 36/1002b lim: 40 exec/s: 71 rss: 74Mb L: 29/37 MS: 1 InsertByte- 00:06:21.060 [2024-11-26 19:32:56.117079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.060 [2024-11-26 19:32:56.117106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.060 [2024-11-26 19:32:56.117164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
SEND (81) qid:0 cid:5 nsid:0 cdw10:ffb0f9e3 cdw11:55e4d591 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.060 [2024-11-26 19:32:56.117178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.060 [2024-11-26 19:32:56.117238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffff35ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.060 [2024-11-26 19:32:56.117251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.060 #72 NEW cov: 12499 ft: 15027 corp: 37/1028b lim: 40 exec/s: 36 rss: 74Mb L: 26/37 MS: 1 PersAutoDict- DE: "\260\371\343U\344\325\221\000"- 00:06:21.060 #72 DONE cov: 12499 ft: 15027 corp: 37/1028b lim: 40 exec/s: 36 rss: 74Mb 00:06:21.060 ###### Recommended dictionary. ###### 00:06:21.060 "\260\371\343U\344\325\221\000" # Uses: 1 00:06:21.060 ###### End of recommended dictionary. ###### 00:06:21.060 Done 72 runs in 2 second(s) 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:21.060 19:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 
-D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:21.060 [2024-11-26 19:32:56.289962] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:06:21.060 [2024-11-26 19:32:56.290033] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123804 ] 00:06:21.320 [2024-11-26 19:32:56.550997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.320 [2024-11-26 19:32:56.606429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.579 [2024-11-26 19:32:56.665782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.579 [2024-11-26 19:32:56.682140] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:21.579 INFO: Running with entropic power schedule (0xFF, 100). 00:06:21.579 INFO: Seed: 4203227027 00:06:21.579 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:21.579 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:21.579 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:21.579 INFO: A corpus is not provided, starting from an empty corpus 00:06:21.579 #2 INITED exec/s: 0 rss: 65Mb 00:06:21.579 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:21.579 This may also happen if the target rejected all inputs we tried so far 00:06:21.579 [2024-11-26 19:32:56.753607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a868686 cdw11:86868686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.579 [2024-11-26 19:32:56.753647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.579 [2024-11-26 19:32:56.753720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:86868686 cdw11:86868686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.579 [2024-11-26 19:32:56.753735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.579 [2024-11-26 19:32:56.753807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:86868686 cdw11:86868686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.579 [2024-11-26 19:32:56.753822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.839 NEW_FUNC[1/717]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:21.839 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:21.839 #18 NEW cov: 12270 ft: 12271 corp: 2/27b lim: 40 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:06:21.839 [2024-11-26 19:32:57.092954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.839 [2024-11-26 19:32:57.092995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.839 #19 NEW cov: 12383 ft: 13677 corp: 3/36b lim: 40 exec/s: 0 rss: 73Mb L: 9/26 MS: 1 CMP- DE: "\002\000\000\000\000\000\000\000"- 00:06:21.839 [2024-11-26 19:32:57.133005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a020100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.839 [2024-11-26 19:32:57.133034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.098 #20 NEW cov: 12389 ft: 14066 corp: 4/45b lim: 40 exec/s: 0 rss: 73Mb L: 9/26 MS: 1 ChangeBit- 00:06:22.098 [2024-11-26 19:32:57.193113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a020100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.098 [2024-11-26 19:32:57.193140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.098 #21 NEW cov: 12474 ft: 14273 corp: 5/54b lim: 40 exec/s: 0 rss: 73Mb L: 9/26 MS: 1 CopyPart- 00:06:22.098 [2024-11-26 19:32:57.253311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a020100 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.098 [2024-11-26 19:32:57.253338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.098 #22 NEW cov: 12474 ft: 14441 corp: 6/63b lim: 40 exec/s: 0 rss: 73Mb L: 9/26 MS: 1 ChangeBinInt- 00:06:22.098 [2024-11-26 19:32:57.293409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.098 [2024-11-26 19:32:57.293436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.098 #23 NEW cov: 12474 ft: 14546 corp: 7/72b lim: 40 exec/s: 0 rss: 73Mb L: 9/26 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:06:22.098 [2024-11-26 19:32:57.333567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a020100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.098 [2024-11-26 19:32:57.333593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.098 #24 NEW cov: 12474 ft: 14592 corp: 8/81b lim: 40 exec/s: 0 rss: 73Mb L: 9/26 MS: 1 ShuffleBytes- 00:06:22.099 [2024-11-26 19:32:57.394561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a868686 cdw11:86868686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.099 [2024-11-26 19:32:57.394588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.099 [2024-11-26 19:32:57.394721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:8686868d cdw11:8d8d8d8d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.099 [2024-11-26 19:32:57.394738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.099 [2024-11-26 19:32:57.394858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:8d868686 cdw11:86868686 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.099 [2024-11-26 19:32:57.394876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.099 [2024-11-26 19:32:57.394993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:86868686 cdw11:86868686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.099 [2024-11-26 19:32:57.395008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.358 #25 NEW cov: 12474 ft: 14964 corp: 9/113b lim: 40 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:06:22.358 [2024-11-26 19:32:57.464210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.358 [2024-11-26 19:32:57.464240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.358 [2024-11-26 19:32:57.464376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00fefefe cdw11:fefefefe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.358 [2024-11-26 19:32:57.464393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.358 #26 NEW cov: 12474 ft: 15206 corp: 10/132b lim: 40 exec/s: 0 rss: 73Mb L: 19/32 MS: 1 InsertRepeatedBytes- 00:06:22.358 [2024-11-26 19:32:57.504345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.358 [2024-11-26 19:32:57.504371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.358 [2024-11-26 19:32:57.504494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.358 [2024-11-26 19:32:57.504510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.358 #27 NEW cov: 12474 ft: 15234 corp: 11/151b lim: 40 exec/s: 0 rss: 73Mb L: 19/32 MS: 1 InsertRepeatedBytes- 00:06:22.358 [2024-11-26 19:32:57.544190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f6fdff06 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.358 [2024-11-26 19:32:57.544216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.358 #28 NEW cov: 12474 ft: 15331 corp: 12/160b lim: 40 exec/s: 0 rss: 73Mb L: 9/32 MS: 1 ChangeBinInt- 00:06:22.358 [2024-11-26 19:32:57.584204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f6fdff06 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.358 [2024-11-26 19:32:57.584232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.358 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:22.358 #29 NEW cov: 12497 ft: 15370 corp: 13/169b lim: 40 exec/s: 0 rss: 74Mb L: 9/32 MS: 1 ShuffleBytes- 00:06:22.358 [2024-11-26 19:32:57.644444] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.358 [2024-11-26 19:32:57.644471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.618 #35 NEW cov: 12497 ft: 15385 corp: 14/178b lim: 40 exec/s: 0 rss: 74Mb L: 9/32 MS: 1 ChangeByte- 00:06:22.618 [2024-11-26 19:32:57.704608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f6fdffb2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.618 [2024-11-26 19:32:57.704635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.618 #36 NEW cov: 12497 ft: 15389 corp: 15/187b lim: 40 exec/s: 36 rss: 74Mb L: 9/32 MS: 1 ChangeByte- 00:06:22.618 [2024-11-26 19:32:57.764772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a060100 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.618 [2024-11-26 19:32:57.764799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.618 #37 NEW cov: 12497 ft: 15400 corp: 16/196b lim: 40 exec/s: 37 rss: 74Mb L: 9/32 MS: 1 ChangeBinInt- 00:06:22.618 [2024-11-26 19:32:57.825442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0201ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.618 [2024-11-26 19:32:57.825472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.618 [2024-11-26 19:32:57.825586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.618 [2024-11-26 19:32:57.825605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.618 [2024-11-26 19:32:57.825726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.618 [2024-11-26 19:32:57.825743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.618 #38 NEW cov: 12497 ft: 15430 corp: 17/226b lim: 40 exec/s: 38 rss: 74Mb L: 30/32 MS: 1 InsertRepeatedBytes- 00:06:22.618 [2024-11-26 19:32:57.875346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.618 [2024-11-26 19:32:57.875375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.618 [2024-11-26 19:32:57.875502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:13fefefe cdw11:fefefefe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.618 [2024-11-26 19:32:57.875518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.618 #39 NEW cov: 12497 ft: 15474 corp: 18/245b lim: 40 exec/s: 39 rss: 74Mb L: 19/32 MS: 1 ChangeBinInt- 00:06:22.878 [2024-11-26 19:32:57.945336] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f6fdff19 cdw11:19191919 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.878 [2024-11-26 19:32:57.945364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.878 #41 NEW cov: 12497 ft: 15594 corp: 19/257b lim: 40 exec/s: 41 rss: 74Mb L: 12/32 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:22.878 [2024-11-26 19:32:57.985637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f6fdffb2 cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.878 [2024-11-26 19:32:57.985664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.878 [2024-11-26 19:32:57.985775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.878 [2024-11-26 19:32:57.985791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.878 #42 NEW cov: 12497 ft: 15608 corp: 20/274b lim: 40 exec/s: 42 rss: 74Mb L: 17/32 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:06:22.878 [2024-11-26 19:32:58.046079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a868686 cdw11:86868686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.878 [2024-11-26 19:32:58.046107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.878 [2024-11-26 19:32:58.046232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:86868686 cdw11:86868686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.878 [2024-11-26 19:32:58.046250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.878 [2024-11-26 19:32:58.046372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:86868686 cdw11:86408686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.878 [2024-11-26 19:32:58.046389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.878 #43 NEW cov: 12497 ft: 15630 corp: 21/301b lim: 40 exec/s: 43 rss: 74Mb L: 27/32 MS: 1 InsertByte- 00:06:22.878 [2024-11-26 19:32:58.085661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000080 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.878 [2024-11-26 19:32:58.085689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.878 #44 NEW cov: 12497 ft: 15686 corp: 22/310b lim: 40 exec/s: 44 rss: 74Mb L: 9/32 MS: 1 ChangeBit- 00:06:22.878 [2024-11-26 19:32:58.126584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a860a02 cdw11:01ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.878 [2024-11-26 19:32:58.126615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.878 [2024-11-26 19:32:58.126736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.878 [2024-11-26 19:32:58.126752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.878 [2024-11-26 19:32:58.126878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:8686ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.878 [2024-11-26 19:32:58.126895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.878 [2024-11-26 19:32:58.127020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.878 [2024-11-26 19:32:58.127037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.878 #45 NEW cov: 12497 ft: 15697 corp: 23/347b lim: 40 exec/s: 45 rss: 74Mb L: 37/37 MS: 1 CrossOver- 00:06:22.878 [2024-11-26 19:32:58.186724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0201ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.878 [2024-11-26 19:32:58.186750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.138 [2024-11-26 19:32:58.186873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.138 [2024-11-26 19:32:58.186890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.138 [2024-11-26 19:32:58.187012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.138 [2024-11-26 19:32:58.187029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.138 [2024-11-26 19:32:58.187155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.138 [2024-11-26 19:32:58.187172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.138 #46 NEW cov: 12497 ft: 15730 corp: 24/386b lim: 40 exec/s: 46 rss: 74Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:23.138 [2024-11-26 19:32:58.246632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a863086 cdw11:86868686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.138 [2024-11-26 19:32:58.246658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.138 [2024-11-26 19:32:58.246783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:86868686 cdw11:86868686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.138 [2024-11-26 19:32:58.246798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.138 [2024-11-26 19:32:58.246927] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:86868686 cdw11:86868686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.138 [2024-11-26 19:32:58.246944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.138 #47 NEW cov: 12497 ft: 15739 corp: 25/413b lim: 40 exec/s: 47 rss: 74Mb L: 27/39 MS: 1 InsertByte- 00:06:23.138 [2024-11-26 19:32:58.286740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:cdcdcdcd cdw11:cdcdcdcd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.138 [2024-11-26 19:32:58.286767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.138 [2024-11-26 19:32:58.286889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcdcdcd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.138 [2024-11-26 19:32:58.286904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.138 [2024-11-26 19:32:58.287025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:cdcdcdcd cdw11:cdcdcdcd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.138 [2024-11-26 19:32:58.287041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.138 #49 NEW cov: 12497 ft: 15812 corp: 26/439b lim: 40 exec/s: 49 rss: 74Mb L: 26/39 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:23.138 [2024-11-26 19:32:58.326841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0201ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.138 [2024-11-26 19:32:58.326868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.138 [2024-11-26 19:32:58.326994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.138 [2024-11-26 19:32:58.327009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.138 [2024-11-26 19:32:58.327121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.138 [2024-11-26 19:32:58.327138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.138 #50 NEW cov: 12497 ft: 15817 corp: 27/464b lim: 40 exec/s: 50 rss: 75Mb L: 25/39 MS: 1 EraseBytes- 00:06:23.138 [2024-11-26 19:32:58.387299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0201ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.138 [2024-11-26 19:32:58.387327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.139 [2024-11-26 19:32:58.387449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.139 [2024-11-26 19:32:58.387466] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.139 [2024-11-26 19:32:58.387586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.139 [2024-11-26 19:32:58.387603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.139 [2024-11-26 19:32:58.387731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.139 [2024-11-26 19:32:58.387751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.139 #51 NEW cov: 12497 ft: 15834 corp: 28/502b lim: 40 exec/s: 51 rss: 75Mb L: 38/39 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:06:23.139 [2024-11-26 19:32:58.427396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a868686 cdw11:86868686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.139 [2024-11-26 19:32:58.427422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.139 [2024-11-26 19:32:58.427536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:8686868d cdw11:8d8d8d8d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.139 [2024-11-26 19:32:58.427553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.139 [2024-11-26 19:32:58.427680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:8d868686 cdw11:86868686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.139 [2024-11-26 19:32:58.427696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.139 [2024-11-26 19:32:58.427819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:aaaaaaaa cdw11:aaaaaa86 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.139 [2024-11-26 19:32:58.427834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.398 #52 NEW cov: 12497 ft: 15846 corp: 29/541b lim: 40 exec/s: 52 rss: 75Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:23.398 [2024-11-26 19:32:58.486757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.398 [2024-11-26 19:32:58.486783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.398 #53 NEW cov: 12497 ft: 15858 corp: 30/552b lim: 40 exec/s: 53 rss: 75Mb L: 11/39 MS: 1 CopyPart- 00:06:23.398 [2024-11-26 19:32:58.526968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:08fcfcfc cdw11:fcfcfcfc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.398 [2024-11-26 19:32:58.526996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.398 #58 NEW cov: 12497 ft: 15877 corp: 31/562b lim: 40 exec/s: 58 rss: 75Mb 
L: 10/39 MS: 5 CopyPart-ChangeBit-CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:06:23.398 [2024-11-26 19:32:58.577910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a868686 cdw11:86868686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.398 [2024-11-26 19:32:58.577938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.398 [2024-11-26 19:32:58.578055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:8686868d cdw11:aaaaaa86 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.398 [2024-11-26 19:32:58.578072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.398 [2024-11-26 19:32:58.578198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:86868686 cdw11:86868686 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.398 [2024-11-26 19:32:58.578216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.398 [2024-11-26 19:32:58.578347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:aaaaaaaa cdw11:aaaaaa86 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.398 [2024-11-26 19:32:58.578363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.398 #59 NEW cov: 12497 ft: 15888 corp: 32/601b lim: 40 exec/s: 59 rss: 75Mb L: 39/39 MS: 1 CopyPart- 00:06:23.398 [2024-11-26 19:32:58.647393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0201bc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.398 [2024-11-26 19:32:58.647421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.398 #60 NEW cov: 12497 ft: 15896 corp: 33/611b lim: 40 exec/s: 60 rss: 75Mb L: 10/39 MS: 1 InsertByte- 00:06:23.398 [2024-11-26 19:32:58.687408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.398 [2024-11-26 19:32:58.687434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.657 #61 NEW cov: 12497 ft: 15900 corp: 34/620b lim: 40 exec/s: 61 rss: 75Mb L: 9/39 MS: 1 CopyPart- 00:06:23.657 [2024-11-26 19:32:58.728341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0201ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.657 [2024-11-26 19:32:58.728369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.657 [2024-11-26 19:32:58.728492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5bffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.657 [2024-11-26 19:32:58.728509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.657 [2024-11-26 19:32:58.728625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.657 [2024-11-26 19:32:58.728642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.657 [2024-11-26 19:32:58.728762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.657 [2024-11-26 19:32:58.728778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.657 #62 NEW cov: 12497 ft: 15909 corp: 35/659b lim: 40 exec/s: 31 rss: 75Mb L: 39/39 MS: 1 InsertByte- 00:06:23.657 #62 DONE cov: 12497 ft: 15909 corp: 35/659b lim: 40 exec/s: 31 rss: 75Mb 00:06:23.657 ###### Recommended dictionary. ###### 00:06:23.657 "\002\000\000\000\000\000\000\000" # Uses: 3 00:06:23.657 ###### End of recommended dictionary. ###### 00:06:23.657 Done 62 runs in 2 second(s) 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:23.658 19:32:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:06:23.658 
[2024-11-26 19:32:58.921870] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:06:23.658 [2024-11-26 19:32:58.921936] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124344 ] 00:06:23.917 [2024-11-26 19:32:59.177081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.175 [2024-11-26 19:32:59.236071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.175 [2024-11-26 19:32:59.294817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.175 [2024-11-26 19:32:59.311161] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:06:24.175 INFO: Running with entropic power schedule (0xFF, 100). 00:06:24.175 INFO: Seed: 2539256733 00:06:24.175 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:24.175 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:24.175 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:24.175 INFO: A corpus is not provided, starting from an empty corpus 00:06:24.175 #2 INITED exec/s: 0 rss: 65Mb 00:06:24.176 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:24.176 This may also happen if the target rejected all inputs we tried so far 00:06:24.176 [2024-11-26 19:32:59.366462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a660000 cdw11:00280a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.176 [2024-11-26 19:32:59.366491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.435 NEW_FUNC[1/716]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:06:24.435 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:24.435 #7 NEW cov: 12258 ft: 12249 corp: 2/9b lim: 40 exec/s: 0 rss: 72Mb L: 8/8 MS: 5 InsertByte-ShuffleBytes-CrossOver-CrossOver-CMP- DE: "f\000\000\000"- 00:06:24.435 [2024-11-26 19:32:59.697618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a660000 cdw11:00280a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.435 [2024-11-26 19:32:59.697658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.435 [2024-11-26 19:32:59.697720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.435 [2024-11-26 19:32:59.697737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.435 [2024-11-26 19:32:59.697797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.435 [2024-11-26 19:32:59.697818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:06:24.435 #8 NEW cov: 12371 ft: 12971 corp: 3/40b lim: 40 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:06:24.695 [2024-11-26 19:32:59.757454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:66480000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.695 [2024-11-26 19:32:59.757484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.695 #13 NEW cov: 12377 ft: 13249 corp: 4/54b lim: 40 exec/s: 0 rss: 73Mb L: 14/31 MS: 5 PersAutoDict-CopyPart-CrossOver-InsertByte-CMP- DE: "f\000\000\000"-"H\000\000\000\000\000\000\000"- 00:06:24.695 [2024-11-26 19:32:59.797511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a660000 cdw11:00282d0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.695 [2024-11-26 19:32:59.797539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.695 #14 NEW cov: 12462 ft: 13626 corp: 5/62b lim: 40 exec/s: 0 rss: 73Mb L: 8/31 MS: 1 ChangeByte- 00:06:24.695 [2024-11-26 19:32:59.837673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5f6600 cdw11:0000280a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.695 [2024-11-26 19:32:59.837700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.695 #15 NEW cov: 12462 ft: 13703 corp: 6/71b lim: 40 exec/s: 0 rss: 73Mb L: 9/31 MS: 1 InsertByte- 00:06:24.695 [2024-11-26 19:32:59.877874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.695 [2024-11-26 19:32:59.877900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.695 [2024-11-26 19:32:59.877959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.695 [2024-11-26 19:32:59.877973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.695 #19 NEW cov: 12462 ft: 13968 corp: 7/92b lim: 40 exec/s: 0 rss: 73Mb L: 21/31 MS: 4 EraseBytes-ChangeBit-EraseBytes-InsertRepeatedBytes- 00:06:24.695 [2024-11-26 19:32:59.937958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:66000000 cdw11:2d000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.695 [2024-11-26 19:32:59.937984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.695 #24 NEW cov: 12462 ft: 14007 corp: 8/101b lim: 40 exec/s: 0 rss: 73Mb L: 9/31 MS: 5 ShuffleBytes-ChangeBinInt-CrossOver-PersAutoDict-CopyPart- DE: "f\000\000\000"- 00:06:24.695 [2024-11-26 19:32:59.978198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.695 [2024-11-26 19:32:59.978223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.695 [2024-11-26 
19:32:59.978282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.695 [2024-11-26 19:32:59.978295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.695 [2024-11-26 19:32:59.978349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.695 [2024-11-26 19:32:59.978362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.695 #29 NEW cov: 12462 ft: 14069 corp: 9/129b lim: 40 exec/s: 0 rss: 73Mb L: 28/31 MS: 5 ChangeByte-ShuffleBytes-CrossOver-CopyPart-InsertRepeatedBytes- 00:06:24.955 [2024-11-26 19:33:00.018128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.955 [2024-11-26 19:33:00.018155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.955 #30 NEW cov: 12462 ft: 14126 corp: 10/144b lim: 40 exec/s: 0 rss: 73Mb L: 15/31 MS: 1 CrossOver- 00:06:24.955 [2024-11-26 19:33:00.058265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.955 [2024-11-26 19:33:00.058293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.955 #31 NEW cov: 12462 ft: 14205 corp: 11/153b lim: 40 exec/s: 0 rss: 73Mb L: 9/31 MS: 1 ChangeBinInt- 00:06:24.955 [2024-11-26 19:33:00.118425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.955 [2024-11-26 19:33:00.118451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.955 #32 NEW cov: 12462 ft: 14217 corp: 12/168b lim: 40 exec/s: 0 rss: 73Mb L: 15/31 MS: 1 ChangeBinInt- 00:06:24.955 [2024-11-26 19:33:00.178973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a660000 cdw11:00280a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.955 [2024-11-26 19:33:00.178999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.955 [2024-11-26 19:33:00.179055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.955 [2024-11-26 19:33:00.179069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.955 [2024-11-26 19:33:00.179124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.955 [2024-11-26 19:33:00.179138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.955 [2024-11-26 19:33:00.179209] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.955 [2024-11-26 19:33:00.179223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.955 #33 NEW cov: 12462 ft: 14766 corp: 13/205b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:06:24.955 [2024-11-26 19:33:00.238734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:66480000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.955 [2024-11-26 19:33:00.238760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.215 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:25.215 #34 NEW cov: 12485 ft: 14815 corp: 14/219b lim: 40 exec/s: 0 rss: 73Mb L: 14/37 MS: 1 ChangeBit- 00:06:25.215 [2024-11-26 19:33:00.299260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a660000 cdw11:00280a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.215 [2024-11-26 19:33:00.299287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.215 [2024-11-26 19:33:00.299343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.215 [2024-11-26 19:33:00.299360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.215 [2024-11-26 19:33:00.299417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.215 [2024-11-26 19:33:00.299430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.215 [2024-11-26 19:33:00.299489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff48 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.215 [2024-11-26 19:33:00.299502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.215 #35 NEW cov: 12485 ft: 14829 corp: 15/258b lim: 40 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 PersAutoDict- DE: "H\000\000\000\000\000\000\000"- 00:06:25.215 [2024-11-26 19:33:00.338964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:66006648 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.215 [2024-11-26 19:33:00.338989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.215 #36 NEW cov: 12485 ft: 14854 corp: 16/273b lim: 40 exec/s: 36 rss: 74Mb L: 15/39 MS: 1 CrossOver- 00:06:25.215 [2024-11-26 19:33:00.399190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:66006648 cdw11:00000066 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.215 [2024-11-26 19:33:00.399216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:25.215 #37 NEW cov: 12485 ft: 14936 corp: 17/288b lim: 40 exec/s: 37 rss: 74Mb L: 15/39 MS: 1 PersAutoDict- DE: "f\000\000\000"- 00:06:25.215 [2024-11-26 19:33:00.459359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a660000 cdw11:2a00280a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.215 [2024-11-26 19:33:00.459386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.215 #38 NEW cov: 12485 ft: 14955 corp: 18/297b lim: 40 exec/s: 38 rss: 74Mb L: 9/39 MS: 1 InsertByte- 00:06:25.215 [2024-11-26 19:33:00.499440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.215 [2024-11-26 19:33:00.499465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.476 #44 NEW cov: 12485 ft: 14995 corp: 19/306b lim: 40 exec/s: 44 rss: 74Mb L: 9/39 MS: 1 ChangeBit- 00:06:25.476 [2024-11-26 19:33:00.559874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:66006648 cdw11:00000066 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.476 [2024-11-26 19:33:00.559901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.476 [2024-11-26 19:33:00.559959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.476 [2024-11-26 19:33:00.559973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.476 [2024-11-26 19:33:00.560029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.476 [2024-11-26 19:33:00.560043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.476 #45 NEW cov: 12485 ft: 15014 corp: 20/337b lim: 40 exec/s: 45 rss: 74Mb L: 31/39 MS: 1 InsertRepeatedBytes- 00:06:25.476 [2024-11-26 19:33:00.619796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.476 [2024-11-26 19:33:00.619825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.476 #46 NEW cov: 12485 ft: 15043 corp: 21/346b lim: 40 exec/s: 46 rss: 74Mb L: 9/39 MS: 1 CopyPart- 00:06:25.476 [2024-11-26 19:33:00.660230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a660000 cdw11:00280a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.476 [2024-11-26 19:33:00.660256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.476 [2024-11-26 19:33:00.660313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.476 [2024-11-26 19:33:00.660327] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.476 [2024-11-26 19:33:00.660381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.476 [2024-11-26 19:33:00.660395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.476 [2024-11-26 19:33:00.660451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff48 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.476 [2024-11-26 19:33:00.660465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.476 #47 NEW cov: 12485 ft: 15048 corp: 22/385b lim: 40 exec/s: 47 rss: 74Mb L: 39/39 MS: 1 ShuffleBytes- 00:06:25.476 [2024-11-26 19:33:00.720097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:280a5f66 cdw11:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.476 [2024-11-26 19:33:00.720124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.476 #48 NEW cov: 12485 ft: 15057 corp: 23/395b lim: 40 exec/s: 48 rss: 74Mb L: 10/39 MS: 1 CopyPart- 00:06:25.476 [2024-11-26 19:33:00.760176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:66480100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.476 [2024-11-26 19:33:00.760202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.476 #49 NEW cov: 12485 ft: 15071 corp: 24/409b lim: 40 exec/s: 49 rss: 74Mb L: 14/39 MS: 1 ChangeBinInt- 00:06:25.735 [2024-11-26 19:33:00.800515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a660000 cdw11:00280a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.735 [2024-11-26 19:33:00.800541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.735 [2024-11-26 19:33:00.800601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.736 [2024-11-26 19:33:00.800616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.736 [2024-11-26 19:33:00.800674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2bffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.736 [2024-11-26 19:33:00.800688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.736 #50 NEW cov: 12485 ft: 15080 corp: 25/440b lim: 40 exec/s: 50 rss: 74Mb L: 31/39 MS: 1 ChangeByte- 00:06:25.736 [2024-11-26 19:33:00.840435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:66480000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.736 [2024-11-26 19:33:00.840464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.736 
#51 NEW cov: 12485 ft: 15092 corp: 26/455b lim: 40 exec/s: 51 rss: 74Mb L: 15/39 MS: 1 InsertByte- 00:06:25.736 [2024-11-26 19:33:00.900513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2866000a cdw11:2800005f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.736 [2024-11-26 19:33:00.900540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.736 #52 NEW cov: 12485 ft: 15102 corp: 27/465b lim: 40 exec/s: 52 rss: 74Mb L: 10/39 MS: 1 ShuffleBytes- 00:06:25.736 [2024-11-26 19:33:00.960879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:66480000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.736 [2024-11-26 19:33:00.960907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.736 [2024-11-26 19:33:00.960965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:66000000 cdw11:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.736 [2024-11-26 19:33:00.960980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.736 #53 NEW cov: 12485 ft: 15106 corp: 28/483b lim: 40 exec/s: 53 rss: 74Mb L: 18/39 MS: 1 PersAutoDict- DE: "f\000\000\000"- 00:06:25.736 [2024-11-26 19:33:01.000837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:664f4f4f cdw11:4f4f4f4f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.736 [2024-11-26 19:33:01.000864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.736 #56 NEW cov: 12485 ft: 15164 corp: 29/496b lim: 40 exec/s: 56 rss: 74Mb L: 13/39 MS: 3 ChangeByte-PersAutoDict-InsertRepeatedBytes- DE: "f\000\000\000"- 00:06:25.736 [2024-11-26 19:33:01.041132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:66004800 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.736 [2024-11-26 19:33:01.041159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.736 [2024-11-26 19:33:01.041219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:2d000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.736 [2024-11-26 19:33:01.041233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.995 #57 NEW cov: 12485 ft: 15196 corp: 30/513b lim: 40 exec/s: 57 rss: 74Mb L: 17/39 MS: 1 PersAutoDict- DE: "H\000\000\000\000\000\000\000"- 00:06:25.995 [2024-11-26 19:33:01.081207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.995 [2024-11-26 19:33:01.081234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.996 [2024-11-26 19:33:01.081292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.996 [2024-11-26 
19:33:01.081306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.996 #60 NEW cov: 12485 ft: 15206 corp: 31/531b lim: 40 exec/s: 60 rss: 74Mb L: 18/39 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:06:25.996 [2024-11-26 19:33:01.121483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.996 [2024-11-26 19:33:01.121511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.996 [2024-11-26 19:33:01.121573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:28214066 cdw11:48000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.996 [2024-11-26 19:33:01.121588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.996 [2024-11-26 19:33:01.121652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00002821 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.996 [2024-11-26 19:33:01.121668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.996 #61 NEW cov: 12485 ft: 15215 corp: 32/556b lim: 40 exec/s: 61 rss: 74Mb L: 25/39 MS: 1 CopyPart- 00:06:25.996 [2024-11-26 19:33:01.161295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:48000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.996 [2024-11-26 19:33:01.161321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.996 #62 NEW cov: 12485 ft: 15218 corp: 33/565b lim: 40 exec/s: 62 rss: 74Mb L: 9/39 MS: 1 PersAutoDict- DE: "H\000\000\000\000\000\000\000"- 00:06:25.996 [2024-11-26 19:33:01.221559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a660000 cdw11:66000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.996 [2024-11-26 19:33:01.221587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.996 #63 NEW cov: 12485 ft: 15225 corp: 34/573b lim: 40 exec/s: 63 rss: 74Mb L: 8/39 MS: 1 PersAutoDict- DE: "f\000\000\000"- 00:06:25.996 [2024-11-26 19:33:01.261991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a660000 cdw11:00280a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.996 [2024-11-26 19:33:01.262018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.996 [2024-11-26 19:33:01.262077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.996 [2024-11-26 19:33:01.262091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.996 [2024-11-26 19:33:01.262148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:2bffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.996 [2024-11-26 
19:33:01.262162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.996 [2024-11-26 19:33:01.262221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.996 [2024-11-26 19:33:01.262235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.254 #64 NEW cov: 12485 ft: 15226 corp: 35/608b lim: 40 exec/s: 64 rss: 74Mb L: 35/39 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:26.254 [2024-11-26 19:33:01.322070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a660000 cdw11:00280a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.254 [2024-11-26 19:33:01.322099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.254 [2024-11-26 19:33:01.322160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff00ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.254 [2024-11-26 19:33:01.322176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.254 [2024-11-26 19:33:01.322233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2bffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.254 [2024-11-26 19:33:01.322251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.254 [2024-11-26 19:33:01.362154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a660000 cdw11:00280a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.254 [2024-11-26 19:33:01.362181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.254 [2024-11-26 19:33:01.362234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff00ffff cdw11:0aff00ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.254 [2024-11-26 19:33:01.362249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.254 [2024-11-26 19:33:01.362305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff2bffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.254 [2024-11-26 19:33:01.362320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.254 #66 NEW cov: 12485 ft: 15235 corp: 36/639b lim: 40 exec/s: 33 rss: 74Mb L: 31/39 MS: 2 ChangeByte-CopyPart- 00:06:26.254 #66 DONE cov: 12485 ft: 15235 corp: 36/639b lim: 40 exec/s: 33 rss: 74Mb 00:06:26.254 ###### Recommended dictionary. ###### 00:06:26.254 "f\000\000\000" # Uses: 6 00:06:26.254 "H\000\000\000\000\000\000\000" # Uses: 3 00:06:26.254 "\000\000\000\000" # Uses: 0 00:06:26.254 ###### End of recommended dictionary. 
###### 00:06:26.254 Done 66 runs in 2 second(s) 00:06:26.254 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:06:26.254 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:26.254 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:26.255 19:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:06:26.255 [2024-11-26 19:33:01.535652] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:06:26.255 [2024-11-26 19:33:01.535726] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124980 ] 00:06:26.514 [2024-11-26 19:33:01.799847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.773 [2024-11-26 19:33:01.859667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.773 [2024-11-26 19:33:01.919011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.773 [2024-11-26 19:33:01.935347] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:06:26.773 INFO: Running with entropic power schedule (0xFF, 100). 00:06:26.773 INFO: Seed: 868319089 00:06:26.773 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:26.773 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:26.773 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:26.773 INFO: A corpus is not provided, starting from an empty corpus 00:06:26.773 #2 INITED exec/s: 0 rss: 65Mb 00:06:26.773 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:26.773 This may also happen if the target rejected all inputs we tried so far 00:06:26.773 [2024-11-26 19:33:01.991253] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.773 [2024-11-26 19:33:01.991283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.773 [2024-11-26 19:33:01.991339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.773 [2024-11-26 19:33:01.991353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.773 [2024-11-26 19:33:01.991410] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.773 [2024-11-26 19:33:01.991423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.032 NEW_FUNC[1/719]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:06:27.032 NEW_FUNC[2/719]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:27.032 #13 NEW cov: 12285 ft: 12283 corp: 2/33b lim: 35 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:06:27.032 [2024-11-26 19:33:02.333596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.032 [2024-11-26 19:33:02.333652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.032 [2024-11-26 19:33:02.333800] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.032 [2024-11-26 19:33:02.333824] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.032 [2024-11-26 19:33:02.333971] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.032 [2024-11-26 19:33:02.333995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.292 #14 NEW cov: 12398 ft: 13085 corp: 3/66b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 InsertByte- 00:06:27.292 [2024-11-26 19:33:02.403687] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.292 [2024-11-26 19:33:02.403718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.292 [2024-11-26 19:33:02.403861] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.292 [2024-11-26 19:33:02.403879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.292 [2024-11-26 19:33:02.404010] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.292 [2024-11-26 19:33:02.404029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.292 #15 NEW cov: 12404 ft: 13344 corp: 4/99b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeByte- 00:06:27.292 [2024-11-26 19:33:02.472832] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000d2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.292 [2024-11-26 19:33:02.472871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.292 #26 NEW cov: 12496 ft: 14373 corp: 5/111b lim: 35 exec/s: 0 rss: 73Mb L: 12/33 MS: 1 InsertRepeatedBytes- 00:06:27.292 [2024-11-26 19:33:02.534029] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.292 [2024-11-26 19:33:02.534060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.292 [2024-11-26 19:33:02.534195] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.292 [2024-11-26 19:33:02.534222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.292 [2024-11-26 19:33:02.534352] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.292 [2024-11-26 19:33:02.534370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.292 #27 NEW cov: 12496 ft: 14531 corp: 6/145b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 InsertByte- 00:06:27.551 [2024-11-26 19:33:02.604559] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.551 [2024-11-26 
19:33:02.604592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.551 [2024-11-26 19:33:02.604732] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.551 [2024-11-26 19:33:02.604754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.551 [2024-11-26 19:33:02.604886] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.551 [2024-11-26 19:33:02.604905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.551 [2024-11-26 19:33:02.605052] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.551 [2024-11-26 19:33:02.605069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.551 #28 NEW cov: 12496 ft: 14794 corp: 7/180b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 InsertByte- 00:06:27.551 [2024-11-26 19:33:02.674407] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.551 [2024-11-26 19:33:02.674435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.551 [2024-11-26 19:33:02.674571] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.551 [2024-11-26 19:33:02.674591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.551 [2024-11-26 19:33:02.674733] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.551 [2024-11-26 19:33:02.674750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.551 [2024-11-26 19:33:02.674890] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.552 [2024-11-26 19:33:02.674906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.552 #29 NEW cov: 12496 ft: 14851 corp: 8/213b lim: 35 exec/s: 0 rss: 73Mb L: 33/35 MS: 1 ChangeBit- 00:06:27.552 [2024-11-26 19:33:02.724632] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.552 [2024-11-26 19:33:02.724663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.552 [2024-11-26 19:33:02.724804] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.552 [2024-11-26 19:33:02.724823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.552 [2024-11-26 19:33:02.724965] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.552 [2024-11-26 19:33:02.724982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.552 #30 NEW cov: 12496 ft: 14887 corp: 9/245b lim: 35 exec/s: 0 rss: 73Mb L: 32/35 MS: 1 ChangeByte- 00:06:27.552 [2024-11-26 19:33:02.774669] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.552 [2024-11-26 19:33:02.774698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.552 [2024-11-26 19:33:02.774827] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.552 [2024-11-26 19:33:02.774844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.552 [2024-11-26 19:33:02.774974] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.552 [2024-11-26 19:33:02.774991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.552 #31 NEW cov: 12496 ft: 14923 corp: 10/277b lim: 35 exec/s: 0 rss: 73Mb L: 32/35 MS: 1 CrossOver- 00:06:27.552 [2024-11-26 19:33:02.843944] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.552 [2024-11-26 19:33:02.843979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.812 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:27.812 #32 NEW cov: 12519 ft: 15010 corp: 11/290b lim: 35 exec/s: 0 rss: 74Mb L: 13/35 MS: 1 CrossOver- 00:06:27.812 [2024-11-26 19:33:02.914983] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000006e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.812 [2024-11-26 19:33:02.915011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.812 [2024-11-26 19:33:02.915143] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000006e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.812 [2024-11-26 19:33:02.915165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.812 [2024-11-26 19:33:02.915297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000006e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.812 [2024-11-26 19:33:02.915316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.812 [2024-11-26 19:33:02.915460] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000006e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.812 [2024-11-26 19:33:02.915476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.812 #33 NEW cov: 12519 ft: 
15087 corp: 12/323b lim: 35 exec/s: 0 rss: 74Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:06:27.812 [2024-11-26 19:33:02.965297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.812 [2024-11-26 19:33:02.965326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.812 [2024-11-26 19:33:02.965464] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.812 [2024-11-26 19:33:02.965481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.812 [2024-11-26 19:33:02.965609] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.812 [2024-11-26 19:33:02.965626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.812 #34 NEW cov: 12519 ft: 15097 corp: 13/355b lim: 35 exec/s: 34 rss: 74Mb L: 32/35 MS: 1 ChangeBit- 00:06:27.812 [2024-11-26 19:33:03.035510] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.812 [2024-11-26 19:33:03.035540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.812 [2024-11-26 19:33:03.035673] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.812 [2024-11-26 19:33:03.035690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.812 [2024-11-26 19:33:03.035827] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.812 [2024-11-26 19:33:03.035846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.812 #35 NEW cov: 12519 ft: 15117 corp: 14/388b lim: 35 exec/s: 35 rss: 74Mb L: 33/35 MS: 1 CopyPart- 00:06:27.812 [2024-11-26 19:33:03.085684] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.812 [2024-11-26 19:33:03.085713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.812 [2024-11-26 19:33:03.085860] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.812 [2024-11-26 19:33:03.085879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.812 [2024-11-26 19:33:03.086014] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.812 [2024-11-26 19:33:03.086033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.072 #36 NEW cov: 12519 ft: 15159 corp: 15/421b lim: 35 exec/s: 36 rss: 74Mb L: 33/35 MS: 1 InsertByte- 00:06:28.072 [2024-11-26 
19:33:03.154842] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000d2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.072 [2024-11-26 19:33:03.154877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.072 #37 NEW cov: 12519 ft: 15188 corp: 16/434b lim: 35 exec/s: 37 rss: 74Mb L: 13/35 MS: 1 InsertByte- 00:06:28.072 [2024-11-26 19:33:03.205643] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000d2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.072 [2024-11-26 19:33:03.205677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.072 [2024-11-26 19:33:03.205832] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.072 [2024-11-26 19:33:03.205859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.072 [2024-11-26 19:33:03.205992] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000d2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.072 [2024-11-26 19:33:03.206013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.072 #38 NEW cov: 12519 ft: 15265 corp: 17/457b lim: 35 exec/s: 38 rss: 74Mb L: 23/35 MS: 1 CopyPart- 00:06:28.072 [2024-11-26 19:33:03.276175] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.072 [2024-11-26 19:33:03.276203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.072 [2024-11-26 19:33:03.276339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.072 [2024-11-26 19:33:03.276356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.072 [2024-11-26 19:33:03.276485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.072 [2024-11-26 19:33:03.276502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.072 [2024-11-26 19:33:03.276632] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.072 [2024-11-26 19:33:03.276650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.072 #39 NEW cov: 12519 ft: 15282 corp: 18/489b lim: 35 exec/s: 39 rss: 74Mb L: 32/35 MS: 1 ShuffleBytes- 00:06:28.072 [2024-11-26 19:33:03.325817] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.072 [2024-11-26 19:33:03.325845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.072 #40 NEW cov: 12519 ft: 15398 corp: 19/506b lim: 35 exec/s: 40 rss: 74Mb L: 17/35 MS: 
1 EraseBytes- 00:06:28.072 [2024-11-26 19:33:03.376603] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.072 [2024-11-26 19:33:03.376632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.072 [2024-11-26 19:33:03.376769] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.072 [2024-11-26 19:33:03.376787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.072 [2024-11-26 19:33:03.376920] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.072 [2024-11-26 19:33:03.376943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.331 #41 NEW cov: 12519 ft: 15428 corp: 20/538b lim: 35 exec/s: 41 rss: 74Mb L: 32/35 MS: 1 ChangeBinInt- 00:06:28.331 [2024-11-26 19:33:03.426483] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.331 [2024-11-26 19:33:03.426511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.331 [2024-11-26 19:33:03.426644] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.331 [2024-11-26 19:33:03.426660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.331 #42 NEW cov: 12519 ft: 15594 corp: 21/559b lim: 35 exec/s: 42 rss: 74Mb L: 21/35 MS: 1 CrossOver- 00:06:28.331 [2024-11-26 19:33:03.496667] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.331 [2024-11-26 19:33:03.496695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.331 [2024-11-26 19:33:03.496850] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000015 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.331 [2024-11-26 19:33:03.496868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.331 #43 NEW cov: 12519 ft: 15611 corp: 22/580b lim: 35 exec/s: 43 rss: 74Mb L: 21/35 MS: 1 ChangeBinInt- 00:06:28.331 [2024-11-26 19:33:03.567204] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.331 [2024-11-26 19:33:03.567233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.331 [2024-11-26 19:33:03.567376] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.331 [2024-11-26 19:33:03.567396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.331 [2024-11-26 19:33:03.567518] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.331 [2024-11-26 19:33:03.567538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.331 #44 NEW cov: 12519 ft: 15627 corp: 23/612b lim: 35 exec/s: 44 rss: 74Mb L: 32/35 MS: 1 ChangeBinInt- 00:06:28.331 [2024-11-26 19:33:03.617242] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.331 [2024-11-26 19:33:03.617272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.331 [2024-11-26 19:33:03.617408] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.331 [2024-11-26 19:33:03.617427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.331 [2024-11-26 19:33:03.617553] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.331 [2024-11-26 19:33:03.617568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.331 #45 NEW cov: 12519 ft: 15656 corp: 24/644b lim: 35 exec/s: 45 rss: 74Mb L: 32/35 MS: 1 ChangeBinInt- 00:06:28.591 [2024-11-26 19:33:03.666778] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.591 [2024-11-26 19:33:03.666806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.591 [2024-11-26 19:33:03.666942] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.591 [2024-11-26 19:33:03.666960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.591 #46 NEW cov: 12519 ft: 15677 corp: 25/662b lim: 35 exec/s: 46 rss: 74Mb L: 18/35 MS: 1 EraseBytes- 00:06:28.591 [2024-11-26 19:33:03.737693] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.591 [2024-11-26 19:33:03.737727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.591 [2024-11-26 19:33:03.737878] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.591 [2024-11-26 19:33:03.737895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.591 [2024-11-26 19:33:03.738027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.591 [2024-11-26 19:33:03.738045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.591 #47 NEW cov: 12519 ft: 15679 corp: 26/695b lim: 35 exec/s: 47 rss: 74Mb L: 33/35 MS: 1 ChangeBinInt- 00:06:28.591 [2024-11-26 19:33:03.787602] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.591 [2024-11-26 19:33:03.787632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.591 [2024-11-26 19:33:03.787770] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000015 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.591 [2024-11-26 19:33:03.787787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.591 #48 NEW cov: 12519 ft: 15698 corp: 27/716b lim: 35 exec/s: 48 rss: 74Mb L: 21/35 MS: 1 CopyPart- 00:06:28.591 [2024-11-26 19:33:03.858085] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.591 [2024-11-26 19:33:03.858116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.591 [2024-11-26 19:33:03.858251] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.591 [2024-11-26 19:33:03.858271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.591 [2024-11-26 19:33:03.858406] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.591 [2024-11-26 19:33:03.858425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.591 #49 NEW cov: 12519 ft: 15728 corp: 28/749b lim: 35 exec/s: 49 rss: 74Mb L: 33/35 MS: 1 CMP- DE: "G\000\000\000\000\000\000\000"- 00:06:28.852 [2024-11-26 19:33:03.908160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.852 [2024-11-26 19:33:03.908191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.852 [2024-11-26 19:33:03.908323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST MEM BUFFER cid:6 cdw10:0000000d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.852 [2024-11-26 19:33:03.908341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.852 [2024-11-26 19:33:03.908470] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.852 [2024-11-26 19:33:03.908491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.852 [2024-11-26 19:33:03.908625] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.852 [2024-11-26 19:33:03.908644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.852 #50 NEW cov: 12519 ft: 15846 corp: 29/784b lim: 35 exec/s: 50 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:06:28.852 [2024-11-26 19:33:03.977789] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.852 [2024-11-26 19:33:03.977817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.852 #51 NEW cov: 12519 ft: 15879 corp: 30/802b lim: 35 exec/s: 25 rss: 74Mb L: 18/35 MS: 1 EraseBytes- 00:06:28.852 #51 DONE cov: 12519 ft: 15879 corp: 30/802b lim: 35 exec/s: 25 rss: 74Mb 00:06:28.852 ###### Recommended dictionary. ###### 00:06:28.852 "G\000\000\000\000\000\000\000" # Uses: 0 00:06:28.852 ###### End of recommended dictionary. ###### 00:06:28.852 Done 51 runs in 2 second(s) 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:28.852 19:33:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:06:28.852 [2024-11-26 19:33:04.156221] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:06:28.852 [2024-11-26 19:33:04.156288] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125495 ] 00:06:29.111 [2024-11-26 19:33:04.417392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.371 [2024-11-26 19:33:04.465211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.371 [2024-11-26 19:33:04.524445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.371 [2024-11-26 19:33:04.540838] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:06:29.371 INFO: Running with entropic power schedule (0xFF, 100). 00:06:29.371 INFO: Seed: 3473315122 00:06:29.371 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:29.371 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:29.371 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:29.371 INFO: A corpus is not provided, starting from an empty corpus 00:06:29.371 #2 INITED exec/s: 0 rss: 65Mb 00:06:29.371 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:29.371 This may also happen if the target rejected all inputs we tried so far 00:06:29.371 [2024-11-26 19:33:04.596655] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.371 [2024-11-26 19:33:04.596694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.371 [2024-11-26 19:33:04.596763] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.371 [2024-11-26 19:33:04.596783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.371 [2024-11-26 19:33:04.596852] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.371 [2024-11-26 19:33:04.596871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.630 NEW_FUNC[1/717]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:06:29.630 NEW_FUNC[2/717]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:29.630 #5 NEW cov: 12254 ft: 12253 corp: 2/30b lim: 35 exec/s: 0 rss: 72Mb L: 29/29 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:06:29.890 #6 NEW cov: 12367 ft: 13401 corp: 3/39b lim: 35 exec/s: 0 rss: 73Mb L: 9/29 MS: 1 CMP- DE: "\000\221\325\357\235\304#\230"- 00:06:29.890 [2024-11-26 19:33:04.987358] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.890 [2024-11-26 19:33:04.987393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.890 [2024-11-26 19:33:04.987454] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.890 [2024-11-26 19:33:04.987469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.890 #12 NEW cov: 12373 ft: 13727 corp: 4/61b lim: 35 exec/s: 0 rss: 73Mb L: 22/29 MS: 1 InsertRepeatedBytes- 00:06:29.890 #13 NEW cov: 12458 ft: 14016 corp: 5/70b lim: 35 exec/s: 0 rss: 73Mb L: 9/29 MS: 1 CopyPart- 00:06:29.890 [2024-11-26 19:33:05.087772] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.890 [2024-11-26 19:33:05.087803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.890 [2024-11-26 19:33:05.087863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.890 [2024-11-26 19:33:05.087878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.890 [2024-11-26 19:33:05.087937] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.890 [2024-11-26 19:33:05.087954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.890 #14 NEW cov: 12458 ft: 14095 corp: 6/103b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 CMP- DE: "\012\000\000\000"- 00:06:29.890 #15 NEW cov: 12458 ft: 14185 corp: 7/112b lim: 35 exec/s: 0 rss: 73Mb L: 9/33 MS: 1 ChangeBit- 00:06:30.149 [2024-11-26 19:33:05.208148] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.149 [2024-11-26 19:33:05.208179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.149 [2024-11-26 19:33:05.208238] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.149 [2024-11-26 19:33:05.208252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.149 [2024-11-26 19:33:05.208312] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.149 [2024-11-26 19:33:05.208326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.149 #16 NEW cov: 12458 ft: 14305 corp: 8/141b lim: 35 exec/s: 0 rss: 73Mb L: 29/33 MS: 1 ChangeByte- 00:06:30.149 #17 NEW cov: 12458 ft: 14350 corp: 9/150b lim: 35 exec/s: 0 rss: 73Mb L: 9/33 MS: 1 CMP- DE: "\036\000"- 00:06:30.149 #18 NEW cov: 12458 ft: 14430 corp: 10/159b lim: 35 exec/s: 0 rss: 73Mb L: 9/33 MS: 1 ChangeBinInt- 00:06:30.149 [2024-11-26 19:33:05.348403] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.149 [2024-11-26 19:33:05.348435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.149 [2024-11-26 
19:33:05.348496] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.149 [2024-11-26 19:33:05.348511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.149 #19 NEW cov: 12458 ft: 14483 corp: 11/181b lim: 35 exec/s: 0 rss: 73Mb L: 22/33 MS: 1 CrossOver- 00:06:30.149 #20 NEW cov: 12458 ft: 14529 corp: 12/194b lim: 35 exec/s: 0 rss: 73Mb L: 13/33 MS: 1 CrossOver- 00:06:30.409 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:30.409 #21 NEW cov: 12481 ft: 14626 corp: 13/204b lim: 35 exec/s: 0 rss: 73Mb L: 10/33 MS: 1 InsertByte- 00:06:30.409 #22 NEW cov: 12481 ft: 14656 corp: 14/216b lim: 35 exec/s: 0 rss: 73Mb L: 12/33 MS: 1 InsertRepeatedBytes- 00:06:30.409 [2024-11-26 19:33:05.529016] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.409 [2024-11-26 19:33:05.529046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.409 [2024-11-26 19:33:05.529106] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.409 [2024-11-26 19:33:05.529121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.409 [2024-11-26 19:33:05.529182] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.409 [2024-11-26 19:33:05.529198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.409 #23 NEW cov: 12481 ft: 14723 corp: 15/250b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 CrossOver- 00:06:30.409 #24 NEW cov: 12481 ft: 14737 corp: 16/260b lim: 35 exec/s: 24 rss: 74Mb L: 10/34 MS: 1 ChangeBit- 00:06:30.409 [2024-11-26 19:33:05.649091] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000004ef SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.409 [2024-11-26 19:33:05.649124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.409 #27 NEW cov: 12481 ft: 14962 corp: 17/274b lim: 35 exec/s: 27 rss: 74Mb L: 14/34 MS: 3 EraseBytes-ChangeBit-PersAutoDict- DE: "\000\221\325\357\235\304#\230"- 00:06:30.409 [2024-11-26 19:33:05.709528] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.409 [2024-11-26 19:33:05.709557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.409 [2024-11-26 19:33:05.709618] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.409 [2024-11-26 19:33:05.709633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.409 [2024-11-26 19:33:05.709691] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:30.409 [2024-11-26 19:33:05.709706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.668 NEW_FUNC[1/1]: 0x46ba68 in feat_power_management /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:282 00:06:30.668 #28 NEW cov: 12504 ft: 15007 corp: 18/303b lim: 35 exec/s: 28 rss: 74Mb L: 29/34 MS: 1 ChangeBit- 00:06:30.668 #29 NEW cov: 12504 ft: 15017 corp: 19/313b lim: 35 exec/s: 29 rss: 74Mb L: 10/34 MS: 1 CopyPart- 00:06:30.668 #30 NEW cov: 12504 ft: 15028 corp: 20/326b lim: 35 exec/s: 30 rss: 74Mb L: 13/34 MS: 1 CrossOver- 00:06:30.668 [2024-11-26 19:33:05.829623] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ef SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.668 [2024-11-26 19:33:05.829652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.668 #31 NEW cov: 12504 ft: 15051 corp: 21/343b lim: 35 exec/s: 31 rss: 74Mb L: 17/34 MS: 1 InsertRepeatedBytes- 00:06:30.668 #32 NEW cov: 12504 ft: 15121 corp: 22/355b lim: 35 exec/s: 32 rss: 74Mb L: 12/34 MS: 1 CMP- DE: "\377\377"- 00:06:30.668 [2024-11-26 19:33:05.930057] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.668 [2024-11-26 19:33:05.930085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.668 [2024-11-26 19:33:05.930143] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.668 [2024-11-26 19:33:05.930156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.668 #33 NEW cov: 12504 ft: 15167 corp: 23/377b lim: 35 exec/s: 33 rss: 74Mb L: 22/34 MS: 1 ShuffleBytes- 00:06:30.927 #39 NEW cov: 12504 ft: 15172 corp: 24/389b lim: 35 exec/s: 39 rss: 74Mb L: 12/34 MS: 1 ChangeByte- 00:06:30.927 [2024-11-26 19:33:06.030454] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ef SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.927 [2024-11-26 19:33:06.030481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.927 [2024-11-26 19:33:06.030542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.927 [2024-11-26 19:33:06.030556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.927 [2024-11-26 19:33:06.030620] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.927 [2024-11-26 19:33:06.030637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.927 #40 NEW cov: 12504 ft: 15187 corp: 25/417b lim: 35 exec/s: 40 rss: 74Mb L: 28/34 MS: 1 InsertRepeatedBytes- 00:06:30.927 #41 NEW cov: 12504 ft: 15205 corp: 26/428b lim: 35 exec/s: 41 rss: 74Mb L: 11/34 MS: 1 CopyPart- 00:06:30.927 [2024-11-26 19:33:06.150531] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.927 [2024-11-26 19:33:06.150559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.927 #47 NEW cov: 12504 ft: 15216 corp: 27/443b lim: 35 exec/s: 47 rss: 74Mb L: 15/34 MS: 1 InsertRepeatedBytes- 00:06:30.927 #48 NEW cov: 12504 ft: 15263 corp: 28/456b lim: 35 exec/s: 48 rss: 74Mb L: 13/34 MS: 1 ChangeBinInt- 00:06:30.927 [2024-11-26 19:33:06.230739] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.927 [2024-11-26 19:33:06.230766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.187 #49 NEW cov: 12504 ft: 15287 corp: 29/475b lim: 35 exec/s: 49 rss: 74Mb L: 19/34 MS: 1 InsertRepeatedBytes- 00:06:31.187 [2024-11-26 19:33:06.270990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.187 [2024-11-26 19:33:06.271017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.187 [2024-11-26 19:33:06.271078] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.187 [2024-11-26 19:33:06.271093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.187 #50 NEW cov: 12504 ft: 15300 corp: 30/497b lim: 35 exec/s: 50 rss: 74Mb L: 22/34 MS: 1 ShuffleBytes- 00:06:31.187 [2024-11-26 19:33:06.331292] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.187 [2024-11-26 19:33:06.331319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.187 [2024-11-26 19:33:06.331380] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.187 [2024-11-26 19:33:06.331395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.187 [2024-11-26 19:33:06.331453] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000001e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.187 [2024-11-26 19:33:06.331467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.187 #51 NEW cov: 12504 ft: 15312 corp: 31/527b lim: 35 exec/s: 51 rss: 75Mb L: 30/34 MS: 1 CrossOver- 00:06:31.187 [2024-11-26 19:33:06.391121] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000001e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.187 [2024-11-26 19:33:06.391148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.187 NEW_FUNC[1/1]: 0x46fc18 in feat_interrupt_coalescing /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:325 00:06:31.187 #52 NEW cov: 12526 ft: 15340 corp: 32/546b lim: 35 exec/s: 52 rss: 75Mb L: 19/34 MS: 1 CMP- DE: "\036\000"- 
00:06:31.187 [2024-11-26 19:33:06.431587] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.187 [2024-11-26 19:33:06.431619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.187 [2024-11-26 19:33:06.431697] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.187 [2024-11-26 19:33:06.431715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.187 [2024-11-26 19:33:06.431778] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.187 [2024-11-26 19:33:06.431792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.187 #53 NEW cov: 12526 ft: 15346 corp: 33/575b lim: 35 exec/s: 53 rss: 75Mb L: 29/34 MS: 1 ShuffleBytes- 00:06:31.447 #54 NEW cov: 12526 ft: 15354 corp: 34/585b lim: 35 exec/s: 54 rss: 75Mb L: 10/34 MS: 1 ChangeByte- 00:06:31.447 [2024-11-26 19:33:06.531738] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.447 [2024-11-26 19:33:06.531765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.447 [2024-11-26 19:33:06.531828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.448 [2024-11-26 19:33:06.531843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.448 #55 NEW cov: 12526 ft: 15377 corp: 35/609b lim: 35 exec/s: 55 rss: 75Mb L: 24/34 MS: 1 CopyPart- 00:06:31.448 #56 NEW cov: 12526 ft: 15387 corp: 36/620b lim: 35 exec/s: 28 rss: 75Mb L: 11/34 MS: 1 CopyPart- 00:06:31.448 #56 DONE cov: 12526 ft: 15387 corp: 36/620b lim: 35 exec/s: 28 rss: 75Mb 00:06:31.448 ###### Recommended dictionary. ###### 00:06:31.448 "\000\221\325\357\235\304#\230" # Uses: 1 00:06:31.448 "\012\000\000\000" # Uses: 1 00:06:31.448 "\036\000" # Uses: 0 00:06:31.448 "\377\377" # Uses: 0 00:06:31.448 ###### End of recommended dictionary. 
###### 00:06:31.448 Done 56 runs in 2 second(s) 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:31.448 19:33:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:06:31.707 [2024-11-26 19:33:06.758772] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
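Note: the nvmf/run.sh trace above is how each fuzzer instance is given its own NVMe/TCP listener before llvm_nvme_fuzz is launched. A condensed, illustrative sketch of those steps in shell (variable names follow the trace; $rootdir stands in for the long workspace path, and the redirection targets are inferred from the file names used afterwards, since the xtrace does not show redirects):

    fuzzer_type=16
    timen=1                                    # -t: fuzz for roughly one second
    core=0x1
    port=44$(printf %02d $fuzzer_type)         # 4416 for instance 16, 4417 for 17, ...
    corpus_dir=$rootdir/../corpus/llvm_nvmf_$(printf %02d $fuzzer_type)
    mkdir -p $corpus_dir
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # point the target's TCP listener at the per-instance port instead of the default 4420
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        $rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf > /tmp/fuzz_json_${fuzzer_type}.conf
    # known, intentional allocations are suppressed for LeakSanitizer
    LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
    echo leak:spdk_nvmf_qpair_disconnect >> /var/tmp/suppress_nvmf_fuzz
    echo leak:nvmf_ctrlr_create >> /var/tmp/suppress_nvmf_fuzz
    llvm_nvme_fuzz -m $core -s 512 -P $rootdir/../output/llvm/ -F "$trid" \
        -c /tmp/fuzz_json_${fuzzer_type}.conf -t $timen -D $corpus_dir -Z $fuzzer_type

The "Recommended dictionary" entries printed at the end of each run could also be collected into a file and replayed on a later run via libFuzzer's -dict= option; the invocations in this log do not pass -dict.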
00:06:31.707 [2024-11-26 19:33:06.758858] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126262 ] 00:06:31.967 [2024-11-26 19:33:07.018028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.967 [2024-11-26 19:33:07.072737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.967 [2024-11-26 19:33:07.131779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.967 [2024-11-26 19:33:07.148130] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:06:31.967 INFO: Running with entropic power schedule (0xFF, 100). 00:06:31.967 INFO: Seed: 1784322503 00:06:31.967 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:31.967 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:31.967 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:31.967 INFO: A corpus is not provided, starting from an empty corpus 00:06:31.967 #2 INITED exec/s: 0 rss: 65Mb 00:06:31.967 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:31.967 This may also happen if the target rejected all inputs we tried so far 00:06:31.967 [2024-11-26 19:33:07.219421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.967 [2024-11-26 19:33:07.219464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:31.967 [2024-11-26 19:33:07.219546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.967 [2024-11-26 19:33:07.219561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:31.967 [2024-11-26 19:33:07.219643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.967 [2024-11-26 19:33:07.219664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.536 NEW_FUNC[1/717]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:06:32.536 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:32.536 #32 NEW cov: 12342 ft: 12339 corp: 2/69b lim: 105 exec/s: 0 rss: 73Mb L: 68/68 MS: 5 ChangeByte-CrossOver-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:06:32.536 [2024-11-26 19:33:07.569165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-26 19:33:07.569219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.536 [2024-11-26 19:33:07.569346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
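Note: the per-input status lines are libFuzzer's. Reading the first one of this run as a worked example (the field glosses are an editorial summary of libFuzzer's usual output format, not SPDK-specific):

    #32 NEW cov: 12342 ft: 12339 corp: 2/69b lim: 105 exec/s: 0 rss: 73Mb L: 68/68 MS: 5 ChangeByte-CrossOver-ShuffleBytes-ChangeBit-InsertRepeatedBytes-
        # #32 NEW    input number 32 added new coverage and was kept in the corpus
        # cov / ft   covered code points and finer-grained coverage "features" observed so far
        # corp       the corpus now holds 2 units totalling 69 bytes
        # lim        current cap on generated input length
        # exec/s     executions per second (printed as 0 this early in the run)
        # rss        resident memory of the fuzzer process
        # L: 68/68   length of this input / longest input in the corpus
        # MS: 5 ...  the sequence of five mutations that produced it; where a mutation drew on a
        #            dictionary entry, a trailing DE: "..." names the bytes it inserted (as in the
        #            CMP- DE: "\036\000" line earlier)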
00:06:32.536 [2024-11-26 19:33:07.569377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.536 [2024-11-26 19:33:07.569507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-26 19:33:07.569534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.536 #33 NEW cov: 12457 ft: 12996 corp: 3/137b lim: 105 exec/s: 0 rss: 73Mb L: 68/68 MS: 1 ShuffleBytes- 00:06:32.536 [2024-11-26 19:33:07.639149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-26 19:33:07.639183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.536 [2024-11-26 19:33:07.639314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-26 19:33:07.639336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.536 [2024-11-26 19:33:07.639457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-26 19:33:07.639476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.536 #34 NEW cov: 12463 ft: 13274 corp: 4/201b lim: 105 exec/s: 0 rss: 73Mb L: 64/68 MS: 1 EraseBytes- 00:06:32.536 [2024-11-26 19:33:07.689291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-26 19:33:07.689323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.536 [2024-11-26 19:33:07.689450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-26 19:33:07.689472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.536 [2024-11-26 19:33:07.689594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-26 19:33:07.689618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.536 #35 NEW cov: 12548 ft: 13494 corp: 5/265b lim: 105 exec/s: 0 rss: 73Mb L: 64/68 MS: 1 ShuffleBytes- 00:06:32.536 [2024-11-26 19:33:07.759183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-26 19:33:07.759217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.536 #36 NEW cov: 12548 ft: 13987 corp: 6/299b lim: 105 exec/s: 0 rss: 73Mb L: 34/68 MS: 1 EraseBytes- 00:06:32.536 [2024-11-26 19:33:07.809649] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-26 19:33:07.809681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.536 [2024-11-26 19:33:07.809800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-26 19:33:07.809819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.536 [2024-11-26 19:33:07.809940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-26 19:33:07.809959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.796 #37 NEW cov: 12548 ft: 14108 corp: 7/368b lim: 105 exec/s: 0 rss: 73Mb L: 69/69 MS: 1 InsertByte- 00:06:32.796 [2024-11-26 19:33:07.869825] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.796 [2024-11-26 19:33:07.869856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.796 [2024-11-26 19:33:07.869979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.796 [2024-11-26 19:33:07.870005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.796 [2024-11-26 19:33:07.870120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.796 [2024-11-26 19:33:07.870142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.796 #38 NEW cov: 12548 ft: 14239 corp: 8/436b lim: 105 exec/s: 0 rss: 73Mb L: 68/69 MS: 1 ChangeBit- 00:06:32.796 [2024-11-26 19:33:07.909772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.796 [2024-11-26 19:33:07.909802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.796 [2024-11-26 19:33:07.909870] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.796 [2024-11-26 19:33:07.909895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.796 [2024-11-26 19:33:07.910014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.796 [2024-11-26 19:33:07.910034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.796 #39 NEW cov: 12548 ft: 14261 corp: 9/500b lim: 105 exec/s: 0 rss: 73Mb L: 64/69 MS: 1 ChangeBinInt- 00:06:32.796 [2024-11-26 19:33:07.970051] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.796 [2024-11-26 19:33:07.970080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.796 [2024-11-26 19:33:07.970145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.796 [2024-11-26 19:33:07.970165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.796 [2024-11-26 19:33:07.970284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.796 [2024-11-26 19:33:07.970306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.796 #40 NEW cov: 12548 ft: 14362 corp: 10/564b lim: 105 exec/s: 0 rss: 73Mb L: 64/69 MS: 1 CopyPart- 00:06:32.796 [2024-11-26 19:33:08.009781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.796 [2024-11-26 19:33:08.009812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.796 #41 NEW cov: 12548 ft: 14394 corp: 11/586b lim: 105 exec/s: 0 rss: 73Mb L: 22/69 MS: 1 EraseBytes- 00:06:32.797 [2024-11-26 19:33:08.069990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.797 [2024-11-26 19:33:08.070021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.797 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:32.797 #42 NEW cov: 12571 ft: 14450 corp: 12/612b lim: 105 exec/s: 0 rss: 74Mb L: 26/69 MS: 1 InsertRepeatedBytes- 00:06:33.057 [2024-11-26 19:33:08.120465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.057 [2024-11-26 19:33:08.120494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.057 [2024-11-26 19:33:08.120586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.057 [2024-11-26 19:33:08.120611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.057 [2024-11-26 19:33:08.120745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.058 [2024-11-26 19:33:08.120768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.058 #43 NEW cov: 12571 ft: 14544 corp: 13/680b lim: 105 exec/s: 0 rss: 74Mb L: 68/69 MS: 1 ShuffleBytes- 00:06:33.058 [2024-11-26 19:33:08.190711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.058 [2024-11-26 
19:33:08.190746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.058 [2024-11-26 19:33:08.190843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.058 [2024-11-26 19:33:08.190864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.058 [2024-11-26 19:33:08.190987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.058 [2024-11-26 19:33:08.191009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.058 #44 NEW cov: 12571 ft: 14621 corp: 14/744b lim: 105 exec/s: 44 rss: 74Mb L: 64/69 MS: 1 ShuffleBytes- 00:06:33.058 [2024-11-26 19:33:08.250822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.058 [2024-11-26 19:33:08.250851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.058 [2024-11-26 19:33:08.250936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.058 [2024-11-26 19:33:08.250961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.058 [2024-11-26 19:33:08.251082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.058 [2024-11-26 19:33:08.251105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.058 #45 NEW cov: 12571 ft: 14635 corp: 15/808b lim: 105 exec/s: 45 rss: 74Mb L: 64/69 MS: 1 ChangeByte- 00:06:33.058 [2024-11-26 19:33:08.311001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:973078528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.058 [2024-11-26 19:33:08.311035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.058 [2024-11-26 19:33:08.311135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.058 [2024-11-26 19:33:08.311153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.058 [2024-11-26 19:33:08.311274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.058 [2024-11-26 19:33:08.311296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.058 #54 NEW cov: 12571 ft: 14650 corp: 16/880b lim: 105 exec/s: 54 rss: 74Mb L: 72/72 MS: 4 ChangeBit-CopyPart-ChangeBit-InsertRepeatedBytes- 00:06:33.058 [2024-11-26 19:33:08.351135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:33.058 [2024-11-26 19:33:08.351164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.058 [2024-11-26 19:33:08.351234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.058 [2024-11-26 19:33:08.351254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.058 [2024-11-26 19:33:08.351366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.058 [2024-11-26 19:33:08.351388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.318 #55 NEW cov: 12571 ft: 14665 corp: 17/944b lim: 105 exec/s: 55 rss: 74Mb L: 64/72 MS: 1 CopyPart- 00:06:33.318 [2024-11-26 19:33:08.391268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1125900041060352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.318 [2024-11-26 19:33:08.391297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.318 [2024-11-26 19:33:08.391365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.318 [2024-11-26 19:33:08.391386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.318 [2024-11-26 19:33:08.391503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.318 [2024-11-26 19:33:08.391522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.318 #56 NEW cov: 12571 ft: 14703 corp: 18/1013b lim: 105 exec/s: 56 rss: 74Mb L: 69/72 MS: 1 ChangeBit- 00:06:33.318 [2024-11-26 19:33:08.461204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.318 [2024-11-26 19:33:08.461233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.318 #57 NEW cov: 12571 ft: 14762 corp: 19/1039b lim: 105 exec/s: 57 rss: 74Mb L: 26/72 MS: 1 CrossOver- 00:06:33.318 [2024-11-26 19:33:08.531726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8796227239936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.318 [2024-11-26 19:33:08.531759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.318 [2024-11-26 19:33:08.531877] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.318 [2024-11-26 19:33:08.531898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.318 [2024-11-26 19:33:08.532025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:33.318 [2024-11-26 19:33:08.532049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.318 #58 NEW cov: 12571 ft: 14770 corp: 20/1107b lim: 105 exec/s: 58 rss: 74Mb L: 68/72 MS: 1 ChangeBit- 00:06:33.318 [2024-11-26 19:33:08.582042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:35184506308608 len:144 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.318 [2024-11-26 19:33:08.582076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.318 [2024-11-26 19:33:08.582153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.318 [2024-11-26 19:33:08.582175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.318 [2024-11-26 19:33:08.582290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.318 [2024-11-26 19:33:08.582312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.318 [2024-11-26 19:33:08.582437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:10344644715844964239 len:36752 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.318 [2024-11-26 19:33:08.582459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:33.578 #62 NEW cov: 12571 ft: 15311 corp: 21/1205b lim: 105 exec/s: 62 rss: 74Mb L: 98/98 MS: 4 CrossOver-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:06:33.578 [2024-11-26 19:33:08.641647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.578 [2024-11-26 19:33:08.641680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.578 #63 NEW cov: 12571 ft: 15375 corp: 22/1231b lim: 105 exec/s: 63 rss: 74Mb L: 26/98 MS: 1 ShuffleBytes- 00:06:33.578 [2024-11-26 19:33:08.712210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.578 [2024-11-26 19:33:08.712245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.578 [2024-11-26 19:33:08.712352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.578 [2024-11-26 19:33:08.712376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.578 [2024-11-26 19:33:08.712499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.578 [2024-11-26 19:33:08.712519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.578 #64 NEW cov: 12571 ft: 15388 corp: 23/1306b lim: 
105 exec/s: 64 rss: 74Mb L: 75/98 MS: 1 CopyPart- 00:06:33.578 [2024-11-26 19:33:08.782648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.578 [2024-11-26 19:33:08.782681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.578 [2024-11-26 19:33:08.782754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:7378697629483820646 len:26215 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.578 [2024-11-26 19:33:08.782775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.578 [2024-11-26 19:33:08.782896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.578 [2024-11-26 19:33:08.782916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.578 [2024-11-26 19:33:08.783039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.578 [2024-11-26 19:33:08.783059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:33.578 #65 NEW cov: 12571 ft: 15433 corp: 24/1396b lim: 105 exec/s: 65 rss: 74Mb L: 90/98 MS: 1 InsertRepeatedBytes- 00:06:33.578 [2024-11-26 19:33:08.852961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.578 [2024-11-26 19:33:08.852997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.578 [2024-11-26 19:33:08.853095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.578 [2024-11-26 19:33:08.853119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.578 [2024-11-26 19:33:08.853238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.578 [2024-11-26 19:33:08.853261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.578 [2024-11-26 19:33:08.853379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.578 [2024-11-26 19:33:08.853400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:33.578 #66 NEW cov: 12571 ft: 15487 corp: 25/1497b lim: 105 exec/s: 66 rss: 75Mb L: 101/101 MS: 1 InsertRepeatedBytes- 00:06:33.838 [2024-11-26 19:33:08.902518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.838 [2024-11-26 19:33:08.902551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:06:33.838 #67 NEW cov: 12571 ft: 15497 corp: 26/1527b lim: 105 exec/s: 67 rss: 75Mb L: 30/101 MS: 1 InsertRepeatedBytes- 00:06:33.838 [2024-11-26 19:33:08.952976] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.838 [2024-11-26 19:33:08.953009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.838 [2024-11-26 19:33:08.953099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.838 [2024-11-26 19:33:08.953123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.838 [2024-11-26 19:33:08.953248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:40681930227712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.838 [2024-11-26 19:33:08.953271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.838 #68 NEW cov: 12571 ft: 15580 corp: 27/1596b lim: 105 exec/s: 68 rss: 75Mb L: 69/101 MS: 1 ChangeByte- 00:06:33.838 [2024-11-26 19:33:09.003299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.838 [2024-11-26 19:33:09.003332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.838 [2024-11-26 19:33:09.003420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.838 [2024-11-26 19:33:09.003443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.838 [2024-11-26 19:33:09.003556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1591483802067140630 len:5655 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.838 [2024-11-26 19:33:09.003576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.838 [2024-11-26 19:33:09.003709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.839 [2024-11-26 19:33:09.003732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:33.839 #69 NEW cov: 12571 ft: 15630 corp: 28/1684b lim: 105 exec/s: 69 rss: 75Mb L: 88/101 MS: 1 InsertRepeatedBytes- 00:06:33.839 [2024-11-26 19:33:09.072960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.839 [2024-11-26 19:33:09.072992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.839 #70 NEW cov: 12571 ft: 15638 corp: 29/1714b lim: 105 exec/s: 70 rss: 75Mb L: 30/101 MS: 1 CMP- DE: "h\001\000\000"- 00:06:33.839 [2024-11-26 19:33:09.143198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:33.839 [2024-11-26 19:33:09.143231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.103 #71 NEW cov: 12571 ft: 15647 corp: 30/1744b lim: 105 exec/s: 71 rss: 75Mb L: 30/101 MS: 1 ShuffleBytes- 00:06:34.103 [2024-11-26 19:33:09.183675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134225920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.103 [2024-11-26 19:33:09.183703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.103 [2024-11-26 19:33:09.183804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.104 [2024-11-26 19:33:09.183824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:34.104 [2024-11-26 19:33:09.183953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.104 [2024-11-26 19:33:09.183975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:34.104 #72 NEW cov: 12571 ft: 15654 corp: 31/1819b lim: 105 exec/s: 36 rss: 75Mb L: 75/101 MS: 1 ChangeBit- 00:06:34.104 #72 DONE cov: 12571 ft: 15654 corp: 31/1819b lim: 105 exec/s: 36 rss: 75Mb 00:06:34.104 ###### Recommended dictionary. ###### 00:06:34.104 "h\001\000\000" # Uses: 0 00:06:34.104 ###### End of recommended dictionary. ###### 00:06:34.104 Done 72 runs in 2 second(s) 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:06:34.104 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": 
"4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:34.105 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:34.105 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:34.105 19:33:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:06:34.105 [2024-11-26 19:33:09.359555] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:06:34.105 [2024-11-26 19:33:09.359642] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126792 ] 00:06:34.366 [2024-11-26 19:33:09.621761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.626 [2024-11-26 19:33:09.680497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.626 [2024-11-26 19:33:09.739241] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.626 [2024-11-26 19:33:09.755612] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:06:34.626 INFO: Running with entropic power schedule (0xFF, 100). 00:06:34.626 INFO: Seed: 98366262 00:06:34.626 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:34.626 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:34.626 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:34.626 INFO: A corpus is not provided, starting from an empty corpus 00:06:34.626 #2 INITED exec/s: 0 rss: 65Mb 00:06:34.626 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:34.626 This may also happen if the target rejected all inputs we tried so far 00:06:34.626 [2024-11-26 19:33:09.820989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.626 [2024-11-26 19:33:09.821019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.626 [2024-11-26 19:33:09.821072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.626 [2024-11-26 19:33:09.821088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:34.885 NEW_FUNC[1/717]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:06:34.885 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:34.885 #12 NEW cov: 12346 ft: 12347 corp: 2/52b lim: 120 exec/s: 0 rss: 73Mb L: 51/51 MS: 5 ShuffleBytes-ChangeByte-ChangeByte-ChangeBit-InsertRepeatedBytes- 00:06:34.885 [2024-11-26 19:33:10.162177] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.885 [2024-11-26 19:33:10.162267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.885 [2024-11-26 19:33:10.162373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.885 [2024-11-26 19:33:10.162415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.145 NEW_FUNC[1/1]: 0xfa89c8 in spdk_process_is_primary /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:315 00:06:35.145 #18 NEW cov: 12477 ft: 13074 corp: 3/103b lim: 120 exec/s: 0 rss: 73Mb L: 51/51 MS: 1 ShuffleBytes- 00:06:35.145 [2024-11-26 19:33:10.231937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.145 [2024-11-26 19:33:10.231969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.145 [2024-11-26 19:33:10.232007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9223372036854775807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.145 [2024-11-26 19:33:10.232023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.145 #29 NEW cov: 12483 ft: 13335 corp: 4/154b lim: 120 exec/s: 0 rss: 73Mb L: 51/51 MS: 1 ChangeBit- 00:06:35.145 [2024-11-26 19:33:10.272341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.145 [2024-11-26 19:33:10.272372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:06:35.145 [2024-11-26 19:33:10.272417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.145 [2024-11-26 19:33:10.272433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.145 [2024-11-26 19:33:10.272485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.145 [2024-11-26 19:33:10.272500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:35.145 [2024-11-26 19:33:10.272554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.145 [2024-11-26 19:33:10.272570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:35.145 #34 NEW cov: 12568 ft: 14004 corp: 5/252b lim: 120 exec/s: 0 rss: 73Mb L: 98/98 MS: 5 ChangeBit-ChangeByte-ChangeBit-ChangeByte-InsertRepeatedBytes- 00:06:35.145 [2024-11-26 19:33:10.312141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.145 [2024-11-26 19:33:10.312169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.145 [2024-11-26 19:33:10.312203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.145 [2024-11-26 19:33:10.312219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.145 #35 NEW cov: 12568 ft: 14067 corp: 6/303b lim: 120 exec/s: 0 rss: 73Mb L: 51/98 MS: 1 ShuffleBytes- 00:06:35.145 [2024-11-26 19:33:10.372325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.145 [2024-11-26 19:33:10.372353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.145 [2024-11-26 19:33:10.372386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709543423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.145 [2024-11-26 19:33:10.372402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.145 #36 NEW cov: 12568 ft: 14198 corp: 7/354b lim: 120 exec/s: 0 rss: 73Mb L: 51/98 MS: 1 ChangeBit- 00:06:35.145 [2024-11-26 19:33:10.432473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.145 [2024-11-26 19:33:10.432501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.145 [2024-11-26 19:33:10.432535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:35.145 [2024-11-26 19:33:10.432551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.405 #37 NEW cov: 12568 ft: 14391 corp: 8/405b lim: 120 exec/s: 0 rss: 73Mb L: 51/98 MS: 1 ChangeBit- 00:06:35.405 [2024-11-26 19:33:10.472585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447688 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.405 [2024-11-26 19:33:10.472618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.405 [2024-11-26 19:33:10.472656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.405 [2024-11-26 19:33:10.472672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.405 #38 NEW cov: 12568 ft: 14460 corp: 9/457b lim: 120 exec/s: 0 rss: 73Mb L: 52/98 MS: 1 InsertByte- 00:06:35.405 [2024-11-26 19:33:10.512704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.405 [2024-11-26 19:33:10.512733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.405 [2024-11-26 19:33:10.512779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.405 [2024-11-26 19:33:10.512795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.405 #39 NEW cov: 12568 ft: 14513 corp: 10/525b lim: 120 exec/s: 0 rss: 73Mb L: 68/98 MS: 1 InsertRepeatedBytes- 00:06:35.405 [2024-11-26 19:33:10.552838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.405 [2024-11-26 19:33:10.552866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.405 [2024-11-26 19:33:10.552903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.405 [2024-11-26 19:33:10.552920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.405 #40 NEW cov: 12568 ft: 14591 corp: 11/593b lim: 120 exec/s: 0 rss: 73Mb L: 68/98 MS: 1 ChangeByte- 00:06:35.405 [2024-11-26 19:33:10.612965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.405 [2024-11-26 19:33:10.612992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.405 [2024-11-26 19:33:10.613028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709543423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.405 [2024-11-26 19:33:10.613044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.405 #41 NEW cov: 12568 ft: 14608 corp: 12/644b lim: 120 exec/s: 0 rss: 74Mb L: 51/98 MS: 1 ShuffleBytes- 00:06:35.405 [2024-11-26 19:33:10.673125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.406 [2024-11-26 19:33:10.673152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.406 [2024-11-26 19:33:10.673188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709543423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.406 [2024-11-26 19:33:10.673203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.406 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:35.406 #42 NEW cov: 12591 ft: 14642 corp: 13/695b lim: 120 exec/s: 0 rss: 74Mb L: 51/98 MS: 1 ChangeBit- 00:06:35.406 [2024-11-26 19:33:10.713250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.406 [2024-11-26 19:33:10.713278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.406 [2024-11-26 19:33:10.713314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709543423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.406 [2024-11-26 19:33:10.713330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.665 #43 NEW cov: 12591 ft: 14677 corp: 14/746b lim: 120 exec/s: 0 rss: 74Mb L: 51/98 MS: 1 CopyPart- 00:06:35.665 [2024-11-26 19:33:10.753682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.753711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.666 [2024-11-26 19:33:10.753759] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.753776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.666 [2024-11-26 19:33:10.753827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.753844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:35.666 [2024-11-26 19:33:10.753897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3238002688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.753913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:35.666 #49 NEW cov: 12591 ft: 14739 corp: 15/862b lim: 
120 exec/s: 0 rss: 74Mb L: 116/116 MS: 1 InsertRepeatedBytes- 00:06:35.666 [2024-11-26 19:33:10.813503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.813530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.666 [2024-11-26 19:33:10.813566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.813581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.666 #50 NEW cov: 12591 ft: 14779 corp: 16/922b lim: 120 exec/s: 50 rss: 74Mb L: 60/116 MS: 1 CrossOver- 00:06:35.666 [2024-11-26 19:33:10.853939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.853970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.666 [2024-11-26 19:33:10.854005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.854019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.666 [2024-11-26 19:33:10.854069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.854085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:35.666 [2024-11-26 19:33:10.854138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:194 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.854153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:35.666 #51 NEW cov: 12591 ft: 14795 corp: 17/1030b lim: 120 exec/s: 51 rss: 74Mb L: 108/116 MS: 1 EraseBytes- 00:06:35.666 [2024-11-26 19:33:10.914102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.914129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.666 [2024-11-26 19:33:10.914177] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9223372036854775807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.914192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.666 [2024-11-26 19:33:10.914243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.914259] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:35.666 [2024-11-26 19:33:10.914310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709518847 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.914325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:35.666 #52 NEW cov: 12591 ft: 14812 corp: 18/1131b lim: 120 exec/s: 52 rss: 74Mb L: 101/116 MS: 1 CopyPart- 00:06:35.666 [2024-11-26 19:33:10.974310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.974338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.666 [2024-11-26 19:33:10.974389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.974405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.666 [2024-11-26 19:33:10.974456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.666 [2024-11-26 19:33:10.974472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:35.963 [2024-11-26 19:33:10.974525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3238002688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 19:33:10.974546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:35.963 #53 NEW cov: 12591 ft: 14829 corp: 19/1247b lim: 120 exec/s: 53 rss: 74Mb L: 116/116 MS: 1 ChangeByte- 00:06:35.963 [2024-11-26 19:33:11.014100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 19:33:11.014127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.963 [2024-11-26 19:33:11.014164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 19:33:11.014180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.963 #59 NEW cov: 12591 ft: 14847 corp: 20/1299b lim: 120 exec/s: 59 rss: 74Mb L: 52/116 MS: 1 InsertByte- 00:06:35.963 [2024-11-26 19:33:11.054210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 19:33:11.054237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.963 [2024-11-26 19:33:11.054274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 
lba:18446744069415239679 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 19:33:11.054290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.963 #60 NEW cov: 12591 ft: 14936 corp: 21/1350b lim: 120 exec/s: 60 rss: 74Mb L: 51/116 MS: 1 CMP- DE: "\000\000\000\011"- 00:06:35.963 [2024-11-26 19:33:11.094319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 19:33:11.094346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.963 [2024-11-26 19:33:11.094382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18376656808803557375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 19:33:11.094398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.963 #61 NEW cov: 12591 ft: 14938 corp: 22/1401b lim: 120 exec/s: 61 rss: 74Mb L: 51/116 MS: 1 ChangeBinInt- 00:06:35.963 [2024-11-26 19:33:11.154523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 19:33:11.154550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.963 [2024-11-26 19:33:11.154588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18376656808803557375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 19:33:11.154609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.963 #62 NEW cov: 12591 ft: 14998 corp: 23/1452b lim: 120 exec/s: 62 rss: 74Mb L: 51/116 MS: 1 ShuffleBytes- 00:06:35.963 [2024-11-26 19:33:11.214675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 19:33:11.214702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.963 [2024-11-26 19:33:11.214739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 19:33:11.214755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.963 #63 NEW cov: 12591 ft: 15041 corp: 24/1503b lim: 120 exec/s: 63 rss: 74Mb L: 51/116 MS: 1 ChangeBinInt- 00:06:35.963 [2024-11-26 19:33:11.255112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 19:33:11.255140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.963 [2024-11-26 19:33:11.255190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 
19:33:11.255207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.963 [2024-11-26 19:33:11.255257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 19:33:11.255274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:35.963 [2024-11-26 19:33:11.255325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.963 [2024-11-26 19:33:11.255341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:36.325 #64 NEW cov: 12591 ft: 15046 corp: 25/1609b lim: 120 exec/s: 64 rss: 74Mb L: 106/116 MS: 1 InsertRepeatedBytes- 00:06:36.325 [2024-11-26 19:33:11.294919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446462602809704447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.294947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.325 [2024-11-26 19:33:11.294984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.295000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.325 #65 NEW cov: 12591 ft: 15053 corp: 26/1661b lim: 120 exec/s: 65 rss: 74Mb L: 52/116 MS: 1 ChangeBinInt- 00:06:36.325 [2024-11-26 19:33:11.355355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.355383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.325 [2024-11-26 19:33:11.355431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9223372036854775807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.355446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.325 [2024-11-26 19:33:11.355497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5714873657160847183 len:20304 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.355513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:36.325 [2024-11-26 19:33:11.355563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.355578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:36.325 #66 NEW cov: 12591 ft: 15088 corp: 27/1780b lim: 120 exec/s: 66 rss: 74Mb L: 119/119 MS: 1 InsertRepeatedBytes- 00:06:36.325 [2024-11-26 19:33:11.415536] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.415566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.325 [2024-11-26 19:33:11.415612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9223372036854775807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.415628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.325 [2024-11-26 19:33:11.415679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.415695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:36.325 [2024-11-26 19:33:11.415747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709518847 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.415764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:36.325 #67 NEW cov: 12591 ft: 15109 corp: 28/1881b lim: 120 exec/s: 67 rss: 74Mb L: 101/119 MS: 1 ChangeBit- 00:06:36.325 [2024-11-26 19:33:11.455327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.455355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.325 [2024-11-26 19:33:11.455389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.455406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.325 #68 NEW cov: 12591 ft: 15139 corp: 29/1938b lim: 120 exec/s: 68 rss: 74Mb L: 57/119 MS: 1 CrossOver- 00:06:36.325 [2024-11-26 19:33:11.495456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.495483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.325 [2024-11-26 19:33:11.495519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709543423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.495535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.325 #69 NEW cov: 12591 ft: 15162 corp: 30/1989b lim: 120 exec/s: 69 rss: 74Mb L: 51/119 MS: 1 ChangeBinInt- 00:06:36.325 [2024-11-26 19:33:11.555950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.325 [2024-11-26 19:33:11.555978] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.326 [2024-11-26 19:33:11.556025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.326 [2024-11-26 19:33:11.556041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.326 [2024-11-26 19:33:11.556093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.326 [2024-11-26 19:33:11.556108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:36.326 [2024-11-26 19:33:11.556160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446463702539436031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.326 [2024-11-26 19:33:11.556179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:36.326 #70 NEW cov: 12591 ft: 15191 corp: 31/2103b lim: 120 exec/s: 70 rss: 75Mb L: 114/119 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:36.326 [2024-11-26 19:33:11.615836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.326 [2024-11-26 19:33:11.615867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.326 [2024-11-26 19:33:11.615919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.326 [2024-11-26 19:33:11.615933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.584 #71 NEW cov: 12591 ft: 15197 corp: 32/2155b lim: 120 exec/s: 71 rss: 75Mb L: 52/119 MS: 1 ShuffleBytes- 00:06:36.584 [2024-11-26 19:33:11.656073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.584 [2024-11-26 19:33:11.656102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.584 [2024-11-26 19:33:11.656143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.584 [2024-11-26 19:33:11.656158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.584 [2024-11-26 19:33:11.656210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.584 [2024-11-26 19:33:11.656227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:36.584 #72 NEW cov: 12591 ft: 15546 corp: 33/2246b lim: 120 exec/s: 72 rss: 75Mb L: 91/119 MS: 1 CopyPart- 00:06:36.584 [2024-11-26 19:33:11.696332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 
lba:4076863232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.584 [2024-11-26 19:33:11.696361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.584 [2024-11-26 19:33:11.696411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.584 [2024-11-26 19:33:11.696427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.584 [2024-11-26 19:33:11.696479] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.584 [2024-11-26 19:33:11.696495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:36.584 [2024-11-26 19:33:11.696547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.584 [2024-11-26 19:33:11.696562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:36.584 #73 NEW cov: 12591 ft: 15560 corp: 34/2362b lim: 120 exec/s: 73 rss: 75Mb L: 116/119 MS: 1 InsertRepeatedBytes- 00:06:36.584 [2024-11-26 19:33:11.756208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491447807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.584 [2024-11-26 19:33:11.756235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.584 [2024-11-26 19:33:11.756269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709543423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.584 [2024-11-26 19:33:11.756290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.584 #74 NEW cov: 12591 ft: 15608 corp: 35/2414b lim: 120 exec/s: 74 rss: 75Mb L: 52/119 MS: 1 InsertByte- 00:06:36.584 [2024-11-26 19:33:11.816693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073491443711 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.584 [2024-11-26 19:33:11.816721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.584 [2024-11-26 19:33:11.816768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9223372036854775807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.584 [2024-11-26 19:33:11.816785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.584 [2024-11-26 19:33:11.816835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5714873657160847183 len:20304 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.584 [2024-11-26 19:33:11.816852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:36.584 [2024-11-26 19:33:11.816906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 
lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.584 [2024-11-26 19:33:11.816922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:36.584 #75 NEW cov: 12591 ft: 15613 corp: 36/2533b lim: 120 exec/s: 37 rss: 75Mb L: 119/119 MS: 1 ChangeBit- 00:06:36.584 #75 DONE cov: 12591 ft: 15613 corp: 36/2533b lim: 120 exec/s: 37 rss: 75Mb 00:06:36.584 ###### Recommended dictionary. ###### 00:06:36.584 "\000\000\000\011" # Uses: 0 00:06:36.584 "\001\000\000\000\000\000\000\000" # Uses: 0 00:06:36.584 ###### End of recommended dictionary. ###### 00:06:36.584 Done 75 runs in 2 second(s) 00:06:36.842 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:36.843 19:33:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:06:36.843 [2024-11-26 19:33:12.014038] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:06:36.843 [2024-11-26 19:33:12.014108] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127264 ] 00:06:37.101 [2024-11-26 19:33:12.269867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.101 [2024-11-26 19:33:12.328096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.101 [2024-11-26 19:33:12.387320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.101 [2024-11-26 19:33:12.403664] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:06:37.361 INFO: Running with entropic power schedule (0xFF, 100). 00:06:37.361 INFO: Seed: 2744373333 00:06:37.361 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:37.361 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:37.361 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:37.361 INFO: A corpus is not provided, starting from an empty corpus 00:06:37.361 #2 INITED exec/s: 0 rss: 65Mb 00:06:37.361 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:37.361 This may also happen if the target rejected all inputs we tried so far 00:06:37.361 [2024-11-26 19:33:12.452197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:37.361 [2024-11-26 19:33:12.452226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.361 [2024-11-26 19:33:12.452263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:37.361 [2024-11-26 19:33:12.452278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.620 NEW_FUNC[1/716]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:06:37.620 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:37.620 #17 NEW cov: 12308 ft: 12307 corp: 2/49b lim: 100 exec/s: 0 rss: 73Mb L: 48/48 MS: 5 InsertByte-ChangeByte-EraseBytes-ShuffleBytes-InsertRepeatedBytes- 00:06:37.620 [2024-11-26 19:33:12.772991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:37.620 [2024-11-26 19:33:12.773025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.620 [2024-11-26 19:33:12.773081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:37.620 [2024-11-26 19:33:12.773096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.620 #23 NEW cov: 12421 ft: 12878 corp: 3/97b lim: 100 exec/s: 0 rss: 73Mb L: 48/48 MS: 1 ShuffleBytes- 00:06:37.620 [2024-11-26 19:33:12.833095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:37.620 [2024-11-26 19:33:12.833124] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.620 [2024-11-26 19:33:12.833175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:37.620 [2024-11-26 19:33:12.833191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.620 #24 NEW cov: 12427 ft: 13106 corp: 4/145b lim: 100 exec/s: 0 rss: 73Mb L: 48/48 MS: 1 ShuffleBytes- 00:06:37.620 [2024-11-26 19:33:12.873174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:37.620 [2024-11-26 19:33:12.873204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.620 [2024-11-26 19:33:12.873247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:37.620 [2024-11-26 19:33:12.873261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.620 #25 NEW cov: 12512 ft: 13553 corp: 5/193b lim: 100 exec/s: 0 rss: 73Mb L: 48/48 MS: 1 ChangeBinInt- 00:06:37.881 [2024-11-26 19:33:12.933742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:37.881 [2024-11-26 19:33:12.933769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.881 [2024-11-26 19:33:12.933822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:37.881 [2024-11-26 19:33:12.933836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.881 [2024-11-26 19:33:12.933890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:37.881 [2024-11-26 19:33:12.933905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:37.881 [2024-11-26 19:33:12.933958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:37.881 [2024-11-26 19:33:12.933972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:37.881 [2024-11-26 19:33:12.934027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:37.881 [2024-11-26 19:33:12.934042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:37.881 #26 NEW cov: 12512 ft: 14027 corp: 6/293b lim: 100 exec/s: 0 rss: 73Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:06:37.881 [2024-11-26 19:33:12.973584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:37.881 [2024-11-26 19:33:12.973615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.881 [2024-11-26 19:33:12.973667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:37.881 [2024-11-26 19:33:12.973682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:06:37.881 [2024-11-26 19:33:12.973736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:37.881 [2024-11-26 19:33:12.973751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:37.881 #27 NEW cov: 12512 ft: 14326 corp: 7/361b lim: 100 exec/s: 0 rss: 73Mb L: 68/100 MS: 1 InsertRepeatedBytes- 00:06:37.881 [2024-11-26 19:33:13.013589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:37.881 [2024-11-26 19:33:13.013619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.881 [2024-11-26 19:33:13.013664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:37.881 [2024-11-26 19:33:13.013678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.881 #28 NEW cov: 12512 ft: 14413 corp: 8/410b lim: 100 exec/s: 0 rss: 73Mb L: 49/100 MS: 1 InsertByte- 00:06:37.881 [2024-11-26 19:33:13.074052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:37.881 [2024-11-26 19:33:13.074081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.881 [2024-11-26 19:33:13.074117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:37.881 [2024-11-26 19:33:13.074131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.881 [2024-11-26 19:33:13.074184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:37.881 [2024-11-26 19:33:13.074198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:37.881 [2024-11-26 19:33:13.074270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:37.881 [2024-11-26 19:33:13.074286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:37.881 #29 NEW cov: 12512 ft: 14444 corp: 9/490b lim: 100 exec/s: 0 rss: 73Mb L: 80/100 MS: 1 CrossOver- 00:06:37.881 [2024-11-26 19:33:13.134186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:37.881 [2024-11-26 19:33:13.134214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.881 [2024-11-26 19:33:13.134263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:37.881 [2024-11-26 19:33:13.134278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.881 [2024-11-26 19:33:13.134331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:37.881 [2024-11-26 19:33:13.134346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:37.881 [2024-11-26 19:33:13.134399] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:37.881 [2024-11-26 19:33:13.134414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:37.881 #30 NEW cov: 12512 ft: 14528 corp: 10/584b lim: 100 exec/s: 0 rss: 73Mb L: 94/100 MS: 1 InsertRepeatedBytes- 00:06:38.140 [2024-11-26 19:33:13.194371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.140 [2024-11-26 19:33:13.194399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.140 [2024-11-26 19:33:13.194447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.140 [2024-11-26 19:33:13.194462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.140 [2024-11-26 19:33:13.194515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:38.140 [2024-11-26 19:33:13.194531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.140 [2024-11-26 19:33:13.194584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:38.140 [2024-11-26 19:33:13.194602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:38.140 #31 NEW cov: 12512 ft: 14572 corp: 11/678b lim: 100 exec/s: 0 rss: 74Mb L: 94/100 MS: 1 ChangeByte- 00:06:38.140 [2024-11-26 19:33:13.254289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.140 [2024-11-26 19:33:13.254317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.140 [2024-11-26 19:33:13.254360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.140 [2024-11-26 19:33:13.254378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.140 #32 NEW cov: 12512 ft: 14593 corp: 12/726b lim: 100 exec/s: 0 rss: 74Mb L: 48/100 MS: 1 ChangeBit- 00:06:38.140 [2024-11-26 19:33:13.314866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.140 [2024-11-26 19:33:13.314892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.140 [2024-11-26 19:33:13.314948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.140 [2024-11-26 19:33:13.314962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.140 [2024-11-26 19:33:13.315015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:38.140 [2024-11-26 19:33:13.315030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.140 [2024-11-26 19:33:13.315084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:3 nsid:0 00:06:38.140 [2024-11-26 19:33:13.315098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:38.140 [2024-11-26 19:33:13.315152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:38.140 [2024-11-26 19:33:13.315167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:38.141 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:38.141 #33 NEW cov: 12535 ft: 14627 corp: 13/826b lim: 100 exec/s: 0 rss: 74Mb L: 100/100 MS: 1 ChangeByte- 00:06:38.141 [2024-11-26 19:33:13.374635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.141 [2024-11-26 19:33:13.374662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.141 [2024-11-26 19:33:13.374709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.141 [2024-11-26 19:33:13.374723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.141 #34 NEW cov: 12535 ft: 14683 corp: 14/875b lim: 100 exec/s: 0 rss: 74Mb L: 49/100 MS: 1 InsertByte- 00:06:38.141 [2024-11-26 19:33:13.414878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.141 [2024-11-26 19:33:13.414905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.141 [2024-11-26 19:33:13.414956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.141 [2024-11-26 19:33:13.414968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.141 [2024-11-26 19:33:13.415024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:38.141 [2024-11-26 19:33:13.415039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.400 #35 NEW cov: 12535 ft: 14692 corp: 15/944b lim: 100 exec/s: 35 rss: 74Mb L: 69/100 MS: 1 CopyPart- 00:06:38.400 [2024-11-26 19:33:13.475053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.400 [2024-11-26 19:33:13.475079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.400 [2024-11-26 19:33:13.475129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.400 [2024-11-26 19:33:13.475144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.400 [2024-11-26 19:33:13.475201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:38.400 [2024-11-26 19:33:13.475217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.400 #36 NEW cov: 12535 ft: 14713 corp: 
16/1013b lim: 100 exec/s: 36 rss: 74Mb L: 69/100 MS: 1 ShuffleBytes- 00:06:38.400 [2024-11-26 19:33:13.535337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.400 [2024-11-26 19:33:13.535365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.400 [2024-11-26 19:33:13.535413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.400 [2024-11-26 19:33:13.535427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.400 [2024-11-26 19:33:13.535480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:38.400 [2024-11-26 19:33:13.535496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.400 [2024-11-26 19:33:13.535550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:38.400 [2024-11-26 19:33:13.535564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:38.400 #37 NEW cov: 12535 ft: 14726 corp: 17/1094b lim: 100 exec/s: 37 rss: 74Mb L: 81/100 MS: 1 CrossOver- 00:06:38.400 [2024-11-26 19:33:13.575182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.400 [2024-11-26 19:33:13.575209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.400 [2024-11-26 19:33:13.575256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.400 [2024-11-26 19:33:13.575269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.400 #38 NEW cov: 12535 ft: 14805 corp: 18/1142b lim: 100 exec/s: 38 rss: 74Mb L: 48/100 MS: 1 ChangeBinInt- 00:06:38.400 [2024-11-26 19:33:13.615544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.400 [2024-11-26 19:33:13.615571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.400 [2024-11-26 19:33:13.615629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.400 [2024-11-26 19:33:13.615643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.400 [2024-11-26 19:33:13.615698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:38.400 [2024-11-26 19:33:13.615712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.400 [2024-11-26 19:33:13.615763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:38.400 [2024-11-26 19:33:13.615777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:38.400 #39 NEW cov: 12535 ft: 14822 corp: 19/1222b lim: 100 exec/s: 39 rss: 74Mb L: 80/100 MS: 1 CopyPart- 
00:06:38.400 [2024-11-26 19:33:13.675469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.400 [2024-11-26 19:33:13.675495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.400 [2024-11-26 19:33:13.675536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.400 [2024-11-26 19:33:13.675549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.400 #40 NEW cov: 12535 ft: 14849 corp: 20/1270b lim: 100 exec/s: 40 rss: 74Mb L: 48/100 MS: 1 ChangeBinInt- 00:06:38.659 [2024-11-26 19:33:13.715581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.659 [2024-11-26 19:33:13.715614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.659 [2024-11-26 19:33:13.715656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.659 [2024-11-26 19:33:13.715671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.659 #41 NEW cov: 12535 ft: 14902 corp: 21/1319b lim: 100 exec/s: 41 rss: 74Mb L: 49/100 MS: 1 ChangeBit- 00:06:38.659 [2024-11-26 19:33:13.755697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.659 [2024-11-26 19:33:13.755723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.659 [2024-11-26 19:33:13.755761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.659 [2024-11-26 19:33:13.755775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.659 #42 NEW cov: 12535 ft: 14975 corp: 22/1368b lim: 100 exec/s: 42 rss: 74Mb L: 49/100 MS: 1 ChangeByte- 00:06:38.659 [2024-11-26 19:33:13.795792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.659 [2024-11-26 19:33:13.795817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.659 [2024-11-26 19:33:13.795856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.659 [2024-11-26 19:33:13.795869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.659 #43 NEW cov: 12535 ft: 14980 corp: 23/1408b lim: 100 exec/s: 43 rss: 74Mb L: 40/100 MS: 1 EraseBytes- 00:06:38.659 [2024-11-26 19:33:13.835939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.659 [2024-11-26 19:33:13.835964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.659 [2024-11-26 19:33:13.836003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.659 [2024-11-26 19:33:13.836018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.659 #44 NEW cov: 12535 ft: 14987 corp: 24/1457b lim: 100 exec/s: 44 rss: 74Mb L: 49/100 MS: 1 InsertByte- 00:06:38.659 [2024-11-26 19:33:13.896134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.659 [2024-11-26 19:33:13.896160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.659 [2024-11-26 19:33:13.896199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.659 [2024-11-26 19:33:13.896214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.659 #45 NEW cov: 12535 ft: 14997 corp: 25/1513b lim: 100 exec/s: 45 rss: 74Mb L: 56/100 MS: 1 CrossOver- 00:06:38.659 [2024-11-26 19:33:13.936345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.659 [2024-11-26 19:33:13.936372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.659 [2024-11-26 19:33:13.936409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.659 [2024-11-26 19:33:13.936424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.659 [2024-11-26 19:33:13.936481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:38.659 [2024-11-26 19:33:13.936512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.659 #46 NEW cov: 12535 ft: 15020 corp: 26/1582b lim: 100 exec/s: 46 rss: 74Mb L: 69/100 MS: 1 ChangeBinInt- 00:06:38.918 [2024-11-26 19:33:13.976352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.918 [2024-11-26 19:33:13.976379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.918 [2024-11-26 19:33:13.976424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.918 [2024-11-26 19:33:13.976439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.918 #47 NEW cov: 12535 ft: 15024 corp: 27/1632b lim: 100 exec/s: 47 rss: 74Mb L: 50/100 MS: 1 CopyPart- 00:06:38.918 [2024-11-26 19:33:14.016450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.918 [2024-11-26 19:33:14.016476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.918 [2024-11-26 19:33:14.016512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.918 [2024-11-26 19:33:14.016528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.918 #48 NEW cov: 12535 ft: 15060 corp: 28/1674b lim: 100 exec/s: 48 rss: 75Mb L: 42/100 MS: 1 EraseBytes- 00:06:38.918 [2024-11-26 19:33:14.076783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.918 [2024-11-26 19:33:14.076809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.918 [2024-11-26 19:33:14.076859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.918 [2024-11-26 19:33:14.076871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.918 [2024-11-26 19:33:14.076925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:38.918 [2024-11-26 19:33:14.076939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.918 #49 NEW cov: 12535 ft: 15099 corp: 29/1743b lim: 100 exec/s: 49 rss: 75Mb L: 69/100 MS: 1 CopyPart- 00:06:38.918 [2024-11-26 19:33:14.137200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.918 [2024-11-26 19:33:14.137228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.918 [2024-11-26 19:33:14.137285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.918 [2024-11-26 19:33:14.137299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.918 [2024-11-26 19:33:14.137352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:38.918 [2024-11-26 19:33:14.137366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.918 [2024-11-26 19:33:14.137420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:38.918 [2024-11-26 19:33:14.137435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:38.918 [2024-11-26 19:33:14.137493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:38.918 [2024-11-26 19:33:14.137510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:38.918 #50 NEW cov: 12535 ft: 15116 corp: 30/1843b lim: 100 exec/s: 50 rss: 75Mb L: 100/100 MS: 1 ChangeBinInt- 00:06:38.918 [2024-11-26 19:33:14.177295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:38.918 [2024-11-26 19:33:14.177322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.918 [2024-11-26 19:33:14.177377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:38.918 [2024-11-26 19:33:14.177392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.918 [2024-11-26 19:33:14.177446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:38.918 [2024-11-26 19:33:14.177461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.918 [2024-11-26 19:33:14.177516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:38.918 [2024-11-26 19:33:14.177531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:38.918 [2024-11-26 19:33:14.177587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:38.918 [2024-11-26 19:33:14.177607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:38.918 #51 NEW cov: 12535 ft: 15128 corp: 31/1943b lim: 100 exec/s: 51 rss: 75Mb L: 100/100 MS: 1 ChangeBit- 00:06:39.177 [2024-11-26 19:33:14.237114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:39.177 [2024-11-26 19:33:14.237139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.177 [2024-11-26 19:33:14.237178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:39.177 [2024-11-26 19:33:14.237191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.177 #52 NEW cov: 12535 ft: 15140 corp: 32/1993b lim: 100 exec/s: 52 rss: 75Mb L: 50/100 MS: 1 InsertByte- 00:06:39.177 [2024-11-26 19:33:14.277250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:39.177 [2024-11-26 19:33:14.277277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.177 [2024-11-26 19:33:14.277313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:39.177 [2024-11-26 19:33:14.277327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.177 #53 NEW cov: 12535 ft: 15175 corp: 33/2043b lim: 100 exec/s: 53 rss: 75Mb L: 50/100 MS: 1 ChangeBit- 00:06:39.177 [2024-11-26 19:33:14.337506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:39.177 [2024-11-26 19:33:14.337532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.177 [2024-11-26 19:33:14.337581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:39.177 [2024-11-26 19:33:14.337596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.177 [2024-11-26 19:33:14.337657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:39.177 [2024-11-26 19:33:14.337673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:39.177 #54 NEW cov: 12535 ft: 15178 corp: 34/2111b lim: 100 exec/s: 54 rss: 75Mb L: 68/100 MS: 1 ChangeByte- 00:06:39.177 [2024-11-26 19:33:14.377440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:39.177 [2024-11-26 19:33:14.377467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.177 [2024-11-26 19:33:14.377503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:39.177 [2024-11-26 19:33:14.377518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.177 #55 NEW cov: 12535 ft: 15202 corp: 35/2159b lim: 100 exec/s: 55 rss: 75Mb L: 48/100 MS: 1 CopyPart- 00:06:39.177 [2024-11-26 19:33:14.417796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:39.177 [2024-11-26 19:33:14.417823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.177 [2024-11-26 19:33:14.417879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:39.177 [2024-11-26 19:33:14.417893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.177 [2024-11-26 19:33:14.417949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:39.177 [2024-11-26 19:33:14.417965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:39.177 [2024-11-26 19:33:14.418020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:39.177 [2024-11-26 19:33:14.418035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:39.177 #56 NEW cov: 12535 ft: 15222 corp: 36/2247b lim: 100 exec/s: 28 rss: 75Mb L: 88/100 MS: 1 CrossOver- 00:06:39.177 #56 DONE cov: 12535 ft: 15222 corp: 36/2247b lim: 100 exec/s: 28 rss: 75Mb 00:06:39.177 Done 56 runs in 2 second(s) 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:39.436 
19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:39.436 19:33:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:06:39.436 [2024-11-26 19:33:14.612249] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:06:39.436 [2024-11-26 19:33:14.612316] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127621 ] 00:06:39.696 [2024-11-26 19:33:14.878380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.696 [2024-11-26 19:33:14.931902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.696 [2024-11-26 19:33:14.990949] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.955 [2024-11-26 19:33:15.007304] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:06:39.955 INFO: Running with entropic power schedule (0xFF, 100). 00:06:39.955 INFO: Seed: 1055398893 00:06:39.955 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:39.955 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:39.955 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:39.955 INFO: A corpus is not provided, starting from an empty corpus 00:06:39.955 #2 INITED exec/s: 0 rss: 65Mb 00:06:39.955 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:39.955 This may also happen if the target rejected all inputs we tried so far 00:06:39.955 [2024-11-26 19:33:15.052728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10995706269676378264 len:39065 00:06:39.955 [2024-11-26 19:33:15.052771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.955 [2024-11-26 19:33:15.052806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10995706271387654296 len:39065 00:06:39.955 [2024-11-26 19:33:15.052823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.955 [2024-11-26 19:33:15.052873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10995706271387654296 len:39065 00:06:39.955 [2024-11-26 19:33:15.052889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:39.955 [2024-11-26 19:33:15.052940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10995706271387654296 len:39065 00:06:39.955 [2024-11-26 19:33:15.052956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.214 NEW_FUNC[1/716]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:06:40.214 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:40.214 #6 NEW cov: 12286 ft: 12285 corp: 2/46b lim: 50 exec/s: 0 rss: 73Mb L: 45/45 MS: 4 CopyPart-CopyPart-ChangeByte-InsertRepeatedBytes- 00:06:40.214 [2024-11-26 19:33:15.393289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:06:40.214 [2024-11-26 19:33:15.393323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.214 #17 NEW cov: 12399 ft: 13172 corp: 3/64b lim: 50 exec/s: 0 rss: 73Mb L: 18/45 MS: 1 InsertRepeatedBytes- 00:06:40.214 [2024-11-26 19:33:15.433381] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:06:40.214 [2024-11-26 19:33:15.433412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.214 #18 NEW cov: 12405 ft: 13356 corp: 4/82b lim: 50 exec/s: 0 rss: 73Mb L: 18/45 MS: 1 ShuffleBytes- 00:06:40.214 [2024-11-26 19:33:15.493863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10995706269676378264 len:39065 00:06:40.214 [2024-11-26 19:33:15.493890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.214 [2024-11-26 19:33:15.493937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10995706271387654296 len:39065 00:06:40.214 [2024-11-26 19:33:15.493952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:06:40.214 [2024-11-26 19:33:15.494005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10995706271387654296 len:39065 00:06:40.214 [2024-11-26 19:33:15.494020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.214 [2024-11-26 19:33:15.494073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10995706271387654296 len:39065 00:06:40.215 [2024-11-26 19:33:15.494087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.473 #19 NEW cov: 12490 ft: 13580 corp: 5/129b lim: 50 exec/s: 0 rss: 73Mb L: 47/47 MS: 1 CrossOver- 00:06:40.473 [2024-11-26 19:33:15.553686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772161 len:1 00:06:40.473 [2024-11-26 19:33:15.553715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.473 #20 NEW cov: 12490 ft: 13814 corp: 6/147b lim: 50 exec/s: 0 rss: 73Mb L: 18/47 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:40.473 [2024-11-26 19:33:15.613950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:06:40.473 [2024-11-26 19:33:15.613977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.473 [2024-11-26 19:33:15.614013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:06:40.473 [2024-11-26 19:33:15.614029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.473 #22 NEW cov: 12490 ft: 14176 corp: 7/175b lim: 50 exec/s: 0 rss: 73Mb L: 28/47 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:40.473 [2024-11-26 19:33:15.654056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:06:40.473 [2024-11-26 19:33:15.654088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.473 [2024-11-26 19:33:15.654129] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:70368744177664 len:1 00:06:40.473 [2024-11-26 19:33:15.654146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.473 #23 NEW cov: 12490 ft: 14261 corp: 8/203b lim: 50 exec/s: 0 rss: 73Mb L: 28/47 MS: 1 ChangeBit- 00:06:40.473 [2024-11-26 19:33:15.714128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17353936713696436287 len:11 00:06:40.473 [2024-11-26 19:33:15.714156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.473 #25 NEW cov: 12490 ft: 14304 corp: 9/213b lim: 50 exec/s: 0 rss: 73Mb L: 10/47 MS: 2 CMP-InsertByte- DE: "\372|`?\360\325\221\000"- 00:06:40.473 [2024-11-26 19:33:15.754233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4194369536 len:11 00:06:40.473 
[2024-11-26 19:33:15.754265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.733 #26 NEW cov: 12490 ft: 14400 corp: 10/223b lim: 50 exec/s: 0 rss: 74Mb L: 10/47 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:40.733 [2024-11-26 19:33:15.814388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16777216 len:11 00:06:40.733 [2024-11-26 19:33:15.814416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.733 #27 NEW cov: 12490 ft: 14484 corp: 11/233b lim: 50 exec/s: 0 rss: 74Mb L: 10/47 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:40.733 [2024-11-26 19:33:15.854833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10995706269676378264 len:39065 00:06:40.733 [2024-11-26 19:33:15.854862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.733 [2024-11-26 19:33:15.854901] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10995706271387654296 len:39065 00:06:40.733 [2024-11-26 19:33:15.854916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.733 [2024-11-26 19:33:15.854966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10995706271387654296 len:39065 00:06:40.733 [2024-11-26 19:33:15.854984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.733 [2024-11-26 19:33:15.855038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10995706271387654296 len:39065 00:06:40.733 [2024-11-26 19:33:15.855053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.733 #28 NEW cov: 12490 ft: 14517 corp: 12/281b lim: 50 exec/s: 0 rss: 74Mb L: 48/48 MS: 1 CopyPart- 00:06:40.733 [2024-11-26 19:33:15.914771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:24577 00:06:40.733 [2024-11-26 19:33:15.914800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.733 [2024-11-26 19:33:15.914842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:06:40.733 [2024-11-26 19:33:15.914858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.733 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:40.733 #29 NEW cov: 12513 ft: 14527 corp: 13/309b lim: 50 exec/s: 0 rss: 74Mb L: 28/48 MS: 1 ChangeByte- 00:06:40.733 [2024-11-26 19:33:15.955163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10995706269676378264 len:39065 00:06:40.733 [2024-11-26 19:33:15.955192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:06:40.733 [2024-11-26 19:33:15.955241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2560098560 len:1 00:06:40.733 [2024-11-26 19:33:15.955257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.733 [2024-11-26 19:33:15.955314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10995706271387654296 len:39065 00:06:40.733 [2024-11-26 19:33:15.955330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.733 [2024-11-26 19:33:15.955384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10995706271387654296 len:39065 00:06:40.733 [2024-11-26 19:33:15.955404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.733 #30 NEW cov: 12513 ft: 14547 corp: 14/354b lim: 50 exec/s: 0 rss: 74Mb L: 45/48 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:40.733 [2024-11-26 19:33:15.995222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10995706269676378264 len:39065 00:06:40.733 [2024-11-26 19:33:15.995251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.733 [2024-11-26 19:33:15.995292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10980225147668568216 len:39065 00:06:40.733 [2024-11-26 19:33:15.995307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.733 [2024-11-26 19:33:15.995361] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10995706271387654296 len:39065 00:06:40.733 [2024-11-26 19:33:15.995377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.733 [2024-11-26 19:33:15.995433] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10995706271387654296 len:39065 00:06:40.733 [2024-11-26 19:33:15.995448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.733 #36 NEW cov: 12513 ft: 14574 corp: 15/401b lim: 50 exec/s: 0 rss: 74Mb L: 47/48 MS: 1 ChangeByte- 00:06:40.733 [2024-11-26 19:33:16.035360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10995706269676378264 len:39065 00:06:40.733 [2024-11-26 19:33:16.035388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.733 [2024-11-26 19:33:16.035434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10995706271387654296 len:39065 00:06:40.733 [2024-11-26 19:33:16.035450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.733 [2024-11-26 19:33:16.035505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 
nsid:0 lba:10995706271387654296 len:39065 00:06:40.733 [2024-11-26 19:33:16.035521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.733 [2024-11-26 19:33:16.035574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10995706271387654296 len:39065 00:06:40.733 [2024-11-26 19:33:16.035589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.993 #37 NEW cov: 12513 ft: 14597 corp: 16/450b lim: 50 exec/s: 37 rss: 74Mb L: 49/49 MS: 1 InsertByte- 00:06:40.993 [2024-11-26 19:33:16.095281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:06:40.993 [2024-11-26 19:33:16.095308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.993 [2024-11-26 19:33:16.095342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4294967296 len:1 00:06:40.993 [2024-11-26 19:33:16.095358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.993 #38 NEW cov: 12513 ft: 14614 corp: 17/476b lim: 50 exec/s: 38 rss: 74Mb L: 26/49 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:40.993 [2024-11-26 19:33:16.135392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:06:40.993 [2024-11-26 19:33:16.135420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.993 [2024-11-26 19:33:16.135478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4539628699267366912 len:1 00:06:40.993 [2024-11-26 19:33:16.135495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.993 #39 NEW cov: 12513 ft: 14639 corp: 18/505b lim: 50 exec/s: 39 rss: 74Mb L: 29/49 MS: 1 InsertByte- 00:06:40.993 [2024-11-26 19:33:16.195464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1092807368418566080 len:11 00:06:40.993 [2024-11-26 19:33:16.195494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.993 #40 NEW cov: 12513 ft: 14664 corp: 19/515b lim: 50 exec/s: 40 rss: 74Mb L: 10/49 MS: 1 ChangeBinInt- 00:06:40.993 [2024-11-26 19:33:16.235703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18049407230293311488 len:61654 00:06:40.993 [2024-11-26 19:33:16.235732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.993 [2024-11-26 19:33:16.235769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:70371176873984 len:1 00:06:40.993 [2024-11-26 19:33:16.235786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.993 #41 NEW cov: 12513 ft: 14727 corp: 20/543b lim: 50 exec/s: 41 rss: 74Mb L: 28/49 MS: 1 
PersAutoDict- DE: "\372|`?\360\325\221\000"- 00:06:40.993 [2024-11-26 19:33:16.276157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10995706269676378264 len:39065 00:06:40.993 [2024-11-26 19:33:16.276186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.993 [2024-11-26 19:33:16.276240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2560098560 len:1 00:06:40.993 [2024-11-26 19:33:16.276257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.993 [2024-11-26 19:33:16.276312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10995706271387654296 len:39065 00:06:40.993 [2024-11-26 19:33:16.276328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.993 [2024-11-26 19:33:16.276383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10995706271387654296 len:39065 00:06:40.993 [2024-11-26 19:33:16.276399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.993 [2024-11-26 19:33:16.276454] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:1051758291829263886 len:39065 00:06:40.993 [2024-11-26 19:33:16.276470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:41.253 #42 NEW cov: 12513 ft: 14760 corp: 21/593b lim: 50 exec/s: 42 rss: 74Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:06:41.253 [2024-11-26 19:33:16.336202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10995706269676378264 len:39065 00:06:41.253 [2024-11-26 19:33:16.336230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.253 [2024-11-26 19:33:16.336272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10980225147668568216 len:39065 00:06:41.253 [2024-11-26 19:33:16.336288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.253 [2024-11-26 19:33:16.336344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10995706271387654296 len:39065 00:06:41.253 [2024-11-26 19:33:16.336362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.253 [2024-11-26 19:33:16.336416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10995706271387654296 len:39065 00:06:41.253 [2024-11-26 19:33:16.336431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:41.253 #43 NEW cov: 12513 ft: 14771 corp: 22/640b lim: 50 exec/s: 43 rss: 74Mb L: 47/50 MS: 1 ShuffleBytes- 00:06:41.253 [2024-11-26 19:33:16.396377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 
cid:0 nsid:0 lba:10995706269676378264 len:39065 00:06:41.253 [2024-11-26 19:33:16.396405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.253 [2024-11-26 19:33:16.396453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10995706271387654296 len:39065 00:06:41.253 [2024-11-26 19:33:16.396469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.253 [2024-11-26 19:33:16.396521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10995706271387654296 len:39065 00:06:41.253 [2024-11-26 19:33:16.396537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.253 [2024-11-26 19:33:16.396589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10995705674387200152 len:153 00:06:41.253 [2024-11-26 19:33:16.396610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:41.253 #44 NEW cov: 12513 ft: 14783 corp: 23/688b lim: 50 exec/s: 44 rss: 74Mb L: 48/50 MS: 1 CMP- DE: "\015\000"- 00:06:41.253 [2024-11-26 19:33:16.436152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167837696 len:1 00:06:41.253 [2024-11-26 19:33:16.436182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.253 #45 NEW cov: 12513 ft: 14848 corp: 24/706b lim: 50 exec/s: 45 rss: 74Mb L: 18/50 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:41.253 [2024-11-26 19:33:16.496344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4503599795142657 len:1 00:06:41.253 [2024-11-26 19:33:16.496372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.253 #46 NEW cov: 12513 ft: 14904 corp: 25/724b lim: 50 exec/s: 46 rss: 74Mb L: 18/50 MS: 1 ChangeBit- 00:06:41.253 [2024-11-26 19:33:16.536783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10995706269676378264 len:39065 00:06:41.253 [2024-11-26 19:33:16.536811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.253 [2024-11-26 19:33:16.536858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2560098560 len:1 00:06:41.253 [2024-11-26 19:33:16.536875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.253 [2024-11-26 19:33:16.536926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10995706271387654296 len:40601 00:06:41.253 [2024-11-26 19:33:16.536942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.253 [2024-11-26 19:33:16.536997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10995706271387654296 len:39065 
00:06:41.253 [2024-11-26 19:33:16.537020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:41.253 #47 NEW cov: 12513 ft: 14907 corp: 26/769b lim: 50 exec/s: 47 rss: 74Mb L: 45/50 MS: 1 ChangeBinInt- 00:06:41.512 [2024-11-26 19:33:16.576665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:06:41.512 [2024-11-26 19:33:16.576694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.512 [2024-11-26 19:33:16.576732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4539628699267366912 len:1 00:06:41.512 [2024-11-26 19:33:16.576748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.512 [2024-11-26 19:33:16.636821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:06:41.512 [2024-11-26 19:33:16.636849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.512 [2024-11-26 19:33:16.636882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4539628699267366912 len:7 00:06:41.512 [2024-11-26 19:33:16.636898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.512 #49 NEW cov: 12513 ft: 14920 corp: 27/798b lim: 50 exec/s: 49 rss: 74Mb L: 29/50 MS: 2 ChangeBit-ChangeBinInt- 00:06:41.512 [2024-11-26 19:33:16.676808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:56002347008 len:1 00:06:41.512 [2024-11-26 19:33:16.676836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.512 #50 NEW cov: 12513 ft: 14949 corp: 28/816b lim: 50 exec/s: 50 rss: 74Mb L: 18/50 MS: 1 PersAutoDict- DE: "\015\000"- 00:06:41.512 [2024-11-26 19:33:16.716956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16793600 len:11 00:06:41.512 [2024-11-26 19:33:16.716986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.512 #51 NEW cov: 12513 ft: 14988 corp: 29/826b lim: 50 exec/s: 51 rss: 75Mb L: 10/50 MS: 1 ChangeBit- 00:06:41.512 [2024-11-26 19:33:16.777449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13301549278890072216 len:39065 00:06:41.512 [2024-11-26 19:33:16.777477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.512 [2024-11-26 19:33:16.777527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10995706271387654296 len:39065 00:06:41.512 [2024-11-26 19:33:16.777542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.512 [2024-11-26 19:33:16.777595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10995706271387654296 len:39065 
00:06:41.512 [2024-11-26 19:33:16.777615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.513 [2024-11-26 19:33:16.777671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10995706271387654296 len:39065 00:06:41.513 [2024-11-26 19:33:16.777687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:41.513 #52 NEW cov: 12513 ft: 15008 corp: 30/871b lim: 50 exec/s: 52 rss: 75Mb L: 45/50 MS: 1 ChangeBit- 00:06:41.513 [2024-11-26 19:33:16.817257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16777226 len:11 00:06:41.513 [2024-11-26 19:33:16.817289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.772 #53 NEW cov: 12513 ft: 15022 corp: 31/881b lim: 50 exec/s: 53 rss: 75Mb L: 10/50 MS: 1 ChangeBinInt- 00:06:41.772 [2024-11-26 19:33:16.877666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:2 00:06:41.772 [2024-11-26 19:33:16.877693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.772 [2024-11-26 19:33:16.877728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:105553116266496 len:1 00:06:41.772 [2024-11-26 19:33:16.877744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.772 [2024-11-26 19:33:16.877797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:06:41.773 [2024-11-26 19:33:16.877813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.773 #54 NEW cov: 12513 ft: 15254 corp: 32/917b lim: 50 exec/s: 54 rss: 75Mb L: 36/50 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\001"- 00:06:41.773 [2024-11-26 19:33:16.937635] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16793614 len:1 00:06:41.773 [2024-11-26 19:33:16.937664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.773 #55 NEW cov: 12513 ft: 15292 corp: 33/931b lim: 50 exec/s: 55 rss: 75Mb L: 14/50 MS: 1 CMP- DE: "\016\000\000\000"- 00:06:41.773 [2024-11-26 19:33:16.978156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10995706269676378264 len:39065 00:06:41.773 [2024-11-26 19:33:16.978184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.773 [2024-11-26 19:33:16.978240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2560098560 len:43777 00:06:41.773 [2024-11-26 19:33:16.978255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.773 [2024-11-26 19:33:16.978310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10995706271387654296 
len:39065 00:06:41.773 [2024-11-26 19:33:16.978326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.773 [2024-11-26 19:33:16.978382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10995706271387654296 len:39065 00:06:41.773 [2024-11-26 19:33:16.978396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:41.773 [2024-11-26 19:33:16.978453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:1051758291829263886 len:39065 00:06:41.773 [2024-11-26 19:33:16.978468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:41.773 #56 NEW cov: 12513 ft: 15293 corp: 34/981b lim: 50 exec/s: 56 rss: 75Mb L: 50/50 MS: 1 ChangeByte- 00:06:41.773 [2024-11-26 19:33:17.038179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10995706269676378264 len:39065 00:06:41.773 [2024-11-26 19:33:17.038207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.773 [2024-11-26 19:33:17.038249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10995538492785203352 len:48 00:06:41.773 [2024-11-26 19:33:17.038266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.773 [2024-11-26 19:33:17.038324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10995706271387654296 len:39065 00:06:41.773 [2024-11-26 19:33:17.038339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.773 [2024-11-26 19:33:17.038413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10995706271387654296 len:39065 00:06:41.773 [2024-11-26 19:33:17.038429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:41.773 #57 NEW cov: 12513 ft: 15301 corp: 35/1028b lim: 50 exec/s: 28 rss: 75Mb L: 47/50 MS: 1 ChangeBinInt- 00:06:41.773 #57 DONE cov: 12513 ft: 15301 corp: 35/1028b lim: 50 exec/s: 28 rss: 75Mb 00:06:41.773 ###### Recommended dictionary. ###### 00:06:41.773 "\001\000\000\000\000\000\000\000" # Uses: 5 00:06:41.773 "\372|`?\360\325\221\000" # Uses: 1 00:06:41.773 "\015\000" # Uses: 1 00:06:41.773 "\000\000\000\000\000\000\000\001" # Uses: 0 00:06:41.773 "\016\000\000\000" # Uses: 0 00:06:41.773 ###### End of recommended dictionary. 
###### 00:06:41.773 Done 57 runs in 2 second(s) 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:42.032 19:33:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:06:42.032 [2024-11-26 19:33:17.210741] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:06:42.032 [2024-11-26 19:33:17.210811] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128159 ] 00:06:42.291 [2024-11-26 19:33:17.468746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.291 [2024-11-26 19:33:17.527215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.291 [2024-11-26 19:33:17.586149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.551 [2024-11-26 19:33:17.602520] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:42.551 INFO: Running with entropic power schedule (0xFF, 100). 00:06:42.551 INFO: Seed: 3650404852 00:06:42.551 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:42.551 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:42.551 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:42.551 INFO: A corpus is not provided, starting from an empty corpus 00:06:42.551 #2 INITED exec/s: 0 rss: 65Mb 00:06:42.551 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:42.551 This may also happen if the target rejected all inputs we tried so far 00:06:42.551 [2024-11-26 19:33:17.648053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:42.551 [2024-11-26 19:33:17.648084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.551 [2024-11-26 19:33:17.648122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:42.551 [2024-11-26 19:33:17.648139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.551 [2024-11-26 19:33:17.648198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:42.551 [2024-11-26 19:33:17.648215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:42.810 NEW_FUNC[1/718]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:06:42.810 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:42.810 #10 NEW cov: 12344 ft: 12343 corp: 2/62b lim: 90 exec/s: 0 rss: 73Mb L: 61/61 MS: 3 CopyPart-ChangeBinInt-InsertRepeatedBytes- 00:06:42.810 [2024-11-26 19:33:17.978996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:42.810 [2024-11-26 19:33:17.979030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.810 [2024-11-26 19:33:17.979065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:42.811 [2024-11-26 19:33:17.979081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:06:42.811 [2024-11-26 19:33:17.979135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:42.811 [2024-11-26 19:33:17.979151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:42.811 [2024-11-26 19:33:17.979205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:42.811 [2024-11-26 19:33:17.979220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:42.811 #11 NEW cov: 12457 ft: 13123 corp: 3/139b lim: 90 exec/s: 0 rss: 73Mb L: 77/77 MS: 1 CopyPart- 00:06:42.811 [2024-11-26 19:33:18.038754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:42.811 [2024-11-26 19:33:18.038783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.811 [2024-11-26 19:33:18.038822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:42.811 [2024-11-26 19:33:18.038837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.811 #12 NEW cov: 12463 ft: 13791 corp: 4/189b lim: 90 exec/s: 0 rss: 73Mb L: 50/77 MS: 1 EraseBytes- 00:06:42.811 [2024-11-26 19:33:18.079157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:42.811 [2024-11-26 19:33:18.079186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.811 [2024-11-26 19:33:18.079228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:42.811 [2024-11-26 19:33:18.079243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.811 [2024-11-26 19:33:18.079296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:42.811 [2024-11-26 19:33:18.079313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:42.811 [2024-11-26 19:33:18.079365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:42.811 [2024-11-26 19:33:18.079380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.070 #13 NEW cov: 12548 ft: 14011 corp: 5/266b lim: 90 exec/s: 0 rss: 73Mb L: 77/77 MS: 1 ChangeBinInt- 00:06:43.070 [2024-11-26 19:33:18.138855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.070 [2024-11-26 19:33:18.138884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.070 #23 NEW cov: 12548 ft: 14848 corp: 6/288b lim: 90 exec/s: 0 rss: 73Mb L: 22/77 MS: 5 CrossOver-ChangeByte-ChangeBinInt-ChangeBit-InsertRepeatedBytes- 00:06:43.070 [2024-11-26 19:33:18.179117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.070 [2024-11-26 
19:33:18.179145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.070 [2024-11-26 19:33:18.179180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.070 [2024-11-26 19:33:18.179195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.070 #24 NEW cov: 12548 ft: 14948 corp: 7/331b lim: 90 exec/s: 0 rss: 73Mb L: 43/77 MS: 1 EraseBytes- 00:06:43.070 [2024-11-26 19:33:18.239430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.070 [2024-11-26 19:33:18.239457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.070 [2024-11-26 19:33:18.239500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.070 [2024-11-26 19:33:18.239516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.070 [2024-11-26 19:33:18.239571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.070 [2024-11-26 19:33:18.239588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.070 #25 NEW cov: 12548 ft: 15002 corp: 8/387b lim: 90 exec/s: 0 rss: 73Mb L: 56/77 MS: 1 EraseBytes- 00:06:43.070 [2024-11-26 19:33:18.299589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.070 [2024-11-26 19:33:18.299622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.070 [2024-11-26 19:33:18.299659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.070 [2024-11-26 19:33:18.299675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.070 [2024-11-26 19:33:18.299729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.070 [2024-11-26 19:33:18.299748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.070 #26 NEW cov: 12548 ft: 15019 corp: 9/443b lim: 90 exec/s: 0 rss: 74Mb L: 56/77 MS: 1 ShuffleBytes- 00:06:43.070 [2024-11-26 19:33:18.359908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.070 [2024-11-26 19:33:18.359935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.070 [2024-11-26 19:33:18.359988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.070 [2024-11-26 19:33:18.360002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.070 [2024-11-26 19:33:18.360056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.070 [2024-11-26 
19:33:18.360072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.070 [2024-11-26 19:33:18.360126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:43.070 [2024-11-26 19:33:18.360141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.330 #27 NEW cov: 12548 ft: 15101 corp: 10/521b lim: 90 exec/s: 0 rss: 74Mb L: 78/78 MS: 1 InsertByte- 00:06:43.330 [2024-11-26 19:33:18.399996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.330 [2024-11-26 19:33:18.400023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.330 [2024-11-26 19:33:18.400071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.330 [2024-11-26 19:33:18.400087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.330 [2024-11-26 19:33:18.400139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.330 [2024-11-26 19:33:18.400155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.330 [2024-11-26 19:33:18.400210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:43.330 [2024-11-26 19:33:18.400226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.330 #28 NEW cov: 12548 ft: 15138 corp: 11/593b lim: 90 exec/s: 0 rss: 74Mb L: 72/78 MS: 1 InsertRepeatedBytes- 00:06:43.330 [2024-11-26 19:33:18.440107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.330 [2024-11-26 19:33:18.440134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.330 [2024-11-26 19:33:18.440187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.330 [2024-11-26 19:33:18.440203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.330 [2024-11-26 19:33:18.440255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.330 [2024-11-26 19:33:18.440270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.330 [2024-11-26 19:33:18.440324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:43.330 [2024-11-26 19:33:18.440339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.330 #29 NEW cov: 12548 ft: 15195 corp: 12/670b lim: 90 exec/s: 0 rss: 74Mb L: 77/78 MS: 1 ShuffleBytes- 00:06:43.330 [2024-11-26 19:33:18.480202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.330 [2024-11-26 
19:33:18.480230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.330 [2024-11-26 19:33:18.480279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.330 [2024-11-26 19:33:18.480295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.330 [2024-11-26 19:33:18.480348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.330 [2024-11-26 19:33:18.480364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.330 [2024-11-26 19:33:18.480417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:43.330 [2024-11-26 19:33:18.480432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.331 #35 NEW cov: 12548 ft: 15257 corp: 13/747b lim: 90 exec/s: 0 rss: 74Mb L: 77/78 MS: 1 ChangeBit- 00:06:43.331 [2024-11-26 19:33:18.520339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.331 [2024-11-26 19:33:18.520367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.331 [2024-11-26 19:33:18.520416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.331 [2024-11-26 19:33:18.520431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.331 [2024-11-26 19:33:18.520485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.331 [2024-11-26 19:33:18.520514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.331 [2024-11-26 19:33:18.520568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:43.331 [2024-11-26 19:33:18.520583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.331 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:43.331 #36 NEW cov: 12571 ft: 15315 corp: 14/824b lim: 90 exec/s: 0 rss: 74Mb L: 77/78 MS: 1 CrossOver- 00:06:43.331 [2024-11-26 19:33:18.560469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.331 [2024-11-26 19:33:18.560496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.331 [2024-11-26 19:33:18.560543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.331 [2024-11-26 19:33:18.560558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.331 [2024-11-26 19:33:18.560613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.331 
[2024-11-26 19:33:18.560629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.331 [2024-11-26 19:33:18.560684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:43.331 [2024-11-26 19:33:18.560699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.331 #37 NEW cov: 12571 ft: 15417 corp: 15/901b lim: 90 exec/s: 0 rss: 74Mb L: 77/78 MS: 1 ChangeBinInt- 00:06:43.331 [2024-11-26 19:33:18.620680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.331 [2024-11-26 19:33:18.620709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.331 [2024-11-26 19:33:18.620750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.331 [2024-11-26 19:33:18.620765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.331 [2024-11-26 19:33:18.620821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.331 [2024-11-26 19:33:18.620837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.331 [2024-11-26 19:33:18.620894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:43.331 [2024-11-26 19:33:18.620910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.590 #38 NEW cov: 12571 ft: 15439 corp: 16/979b lim: 90 exec/s: 38 rss: 74Mb L: 78/78 MS: 1 InsertByte- 00:06:43.590 [2024-11-26 19:33:18.680676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.590 [2024-11-26 19:33:18.680703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.590 [2024-11-26 19:33:18.680740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.590 [2024-11-26 19:33:18.680756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.590 [2024-11-26 19:33:18.680811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.590 [2024-11-26 19:33:18.680828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.590 #39 NEW cov: 12571 ft: 15457 corp: 17/1037b lim: 90 exec/s: 39 rss: 74Mb L: 58/78 MS: 1 EraseBytes- 00:06:43.590 [2024-11-26 19:33:18.720776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.590 [2024-11-26 19:33:18.720803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.590 [2024-11-26 19:33:18.720840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.590 
[2024-11-26 19:33:18.720855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.590 [2024-11-26 19:33:18.720912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.590 [2024-11-26 19:33:18.720927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.590 #40 NEW cov: 12571 ft: 15503 corp: 18/1093b lim: 90 exec/s: 40 rss: 74Mb L: 56/78 MS: 1 ChangeByte- 00:06:43.590 [2024-11-26 19:33:18.760909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.590 [2024-11-26 19:33:18.760935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.590 [2024-11-26 19:33:18.760971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.590 [2024-11-26 19:33:18.760986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.590 [2024-11-26 19:33:18.761040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.591 [2024-11-26 19:33:18.761055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.591 #41 NEW cov: 12571 ft: 15517 corp: 19/1151b lim: 90 exec/s: 41 rss: 74Mb L: 58/78 MS: 1 ShuffleBytes- 00:06:43.591 [2024-11-26 19:33:18.820958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.591 [2024-11-26 19:33:18.820985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.591 [2024-11-26 19:33:18.821022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.591 [2024-11-26 19:33:18.821038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.591 #42 NEW cov: 12571 ft: 15533 corp: 20/1201b lim: 90 exec/s: 42 rss: 74Mb L: 50/78 MS: 1 ChangeByte- 00:06:43.591 [2024-11-26 19:33:18.860867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.591 [2024-11-26 19:33:18.860896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.850 #43 NEW cov: 12571 ft: 15574 corp: 21/1231b lim: 90 exec/s: 43 rss: 74Mb L: 30/78 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:06:43.850 [2024-11-26 19:33:18.921545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.850 [2024-11-26 19:33:18.921573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.850 [2024-11-26 19:33:18.921626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.850 [2024-11-26 19:33:18.921643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.850 
[2024-11-26 19:33:18.921695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.850 [2024-11-26 19:33:18.921711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.850 [2024-11-26 19:33:18.921765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:43.850 [2024-11-26 19:33:18.921781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.850 #44 NEW cov: 12571 ft: 15618 corp: 22/1317b lim: 90 exec/s: 44 rss: 74Mb L: 86/86 MS: 1 CopyPart- 00:06:43.850 [2024-11-26 19:33:18.981383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.850 [2024-11-26 19:33:18.981411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.850 [2024-11-26 19:33:18.981449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.851 [2024-11-26 19:33:18.981465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.851 #45 NEW cov: 12571 ft: 15633 corp: 23/1360b lim: 90 exec/s: 45 rss: 74Mb L: 43/86 MS: 1 ShuffleBytes- 00:06:43.851 [2024-11-26 19:33:19.041391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.851 [2024-11-26 19:33:19.041420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.851 #47 NEW cov: 12571 ft: 15653 corp: 24/1378b lim: 90 exec/s: 47 rss: 74Mb L: 18/86 MS: 2 EraseBytes-CopyPart- 00:06:43.851 [2024-11-26 19:33:19.081790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.851 [2024-11-26 19:33:19.081817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.851 [2024-11-26 19:33:19.081857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.851 [2024-11-26 19:33:19.081873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.851 [2024-11-26 19:33:19.081932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.851 [2024-11-26 19:33:19.081948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.851 #48 NEW cov: 12571 ft: 15671 corp: 25/1448b lim: 90 exec/s: 48 rss: 74Mb L: 70/86 MS: 1 InsertRepeatedBytes- 00:06:43.851 [2024-11-26 19:33:19.122065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:43.851 [2024-11-26 19:33:19.122093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.851 [2024-11-26 19:33:19.122137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:43.851 [2024-11-26 19:33:19.122152] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.851 [2024-11-26 19:33:19.122206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:43.851 [2024-11-26 19:33:19.122223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.851 [2024-11-26 19:33:19.122277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:43.851 [2024-11-26 19:33:19.122291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.851 #49 NEW cov: 12571 ft: 15684 corp: 26/1525b lim: 90 exec/s: 49 rss: 74Mb L: 77/86 MS: 1 CopyPart- 00:06:44.110 [2024-11-26 19:33:19.162192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:44.110 [2024-11-26 19:33:19.162219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.110 [2024-11-26 19:33:19.162266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:44.110 [2024-11-26 19:33:19.162282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.110 [2024-11-26 19:33:19.162337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:44.110 [2024-11-26 19:33:19.162352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:44.110 [2024-11-26 19:33:19.162408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:44.110 [2024-11-26 19:33:19.162425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:44.110 #50 NEW cov: 12571 ft: 15693 corp: 27/1603b lim: 90 exec/s: 50 rss: 74Mb L: 78/86 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:44.110 [2024-11-26 19:33:19.222352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:44.110 [2024-11-26 19:33:19.222381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.110 [2024-11-26 19:33:19.222428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:44.110 [2024-11-26 19:33:19.222444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.110 [2024-11-26 19:33:19.222498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:44.110 [2024-11-26 19:33:19.222514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:44.110 [2024-11-26 19:33:19.222565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:44.110 [2024-11-26 19:33:19.222585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:44.110 #51 NEW cov: 12571 ft: 15708 corp: 28/1680b lim: 90 exec/s: 51 rss: 75Mb L: 77/86 MS: 1 ChangeByte- 00:06:44.110 [2024-11-26 19:33:19.262310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:44.110 [2024-11-26 19:33:19.262338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.110 [2024-11-26 19:33:19.262376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:44.110 [2024-11-26 19:33:19.262391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.110 [2024-11-26 19:33:19.262446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:44.110 [2024-11-26 19:33:19.262461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:44.110 #52 NEW cov: 12571 ft: 15719 corp: 29/1736b lim: 90 exec/s: 52 rss: 75Mb L: 56/86 MS: 1 CopyPart- 00:06:44.110 [2024-11-26 19:33:19.302419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:44.110 [2024-11-26 19:33:19.302447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.110 [2024-11-26 19:33:19.302482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:44.110 [2024-11-26 19:33:19.302497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.110 [2024-11-26 19:33:19.302549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:44.110 [2024-11-26 19:33:19.302564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:44.110 #53 NEW cov: 12571 ft: 15796 corp: 30/1794b lim: 90 exec/s: 53 rss: 75Mb L: 58/86 MS: 1 ChangeBit- 00:06:44.110 [2024-11-26 19:33:19.362343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:44.110 [2024-11-26 19:33:19.362372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.110 #56 NEW cov: 12571 ft: 15814 corp: 31/1817b lim: 90 exec/s: 56 rss: 75Mb L: 23/86 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:06:44.110 [2024-11-26 19:33:19.402840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:44.110 [2024-11-26 19:33:19.402868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.110 [2024-11-26 19:33:19.402915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:44.110 [2024-11-26 19:33:19.402931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.110 [2024-11-26 19:33:19.402984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 
nsid:0 00:06:44.110 [2024-11-26 19:33:19.403000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:44.110 [2024-11-26 19:33:19.403052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:44.110 [2024-11-26 19:33:19.403069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:44.370 #57 NEW cov: 12571 ft: 15852 corp: 32/1903b lim: 90 exec/s: 57 rss: 75Mb L: 86/86 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:44.370 [2024-11-26 19:33:19.463130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:44.370 [2024-11-26 19:33:19.463161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.370 [2024-11-26 19:33:19.463199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:44.370 [2024-11-26 19:33:19.463215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.370 [2024-11-26 19:33:19.463272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:44.370 [2024-11-26 19:33:19.463288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:44.370 [2024-11-26 19:33:19.463344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:44.370 [2024-11-26 19:33:19.463358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:44.370 #58 NEW cov: 12571 ft: 15892 corp: 33/1982b lim: 90 exec/s: 58 rss: 75Mb L: 79/86 MS: 1 InsertByte- 00:06:44.370 [2024-11-26 19:33:19.523143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:44.370 [2024-11-26 19:33:19.523171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.370 [2024-11-26 19:33:19.523220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:44.370 [2024-11-26 19:33:19.523236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.370 [2024-11-26 19:33:19.523292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:44.370 [2024-11-26 19:33:19.523307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:44.370 [2024-11-26 19:33:19.523362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:44.370 [2024-11-26 19:33:19.523377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:44.370 #59 NEW cov: 12571 ft: 15906 corp: 34/2059b lim: 90 exec/s: 59 rss: 75Mb L: 77/86 MS: 1 ChangeBinInt- 00:06:44.370 [2024-11-26 19:33:19.563256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:44.370 [2024-11-26 19:33:19.563284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.370 [2024-11-26 19:33:19.563334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:44.370 [2024-11-26 19:33:19.563350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.370 [2024-11-26 19:33:19.563402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:44.370 [2024-11-26 19:33:19.563418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:44.370 [2024-11-26 19:33:19.563472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:44.370 [2024-11-26 19:33:19.563486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:44.370 #60 NEW cov: 12571 ft: 15953 corp: 35/2136b lim: 90 exec/s: 60 rss: 75Mb L: 77/86 MS: 1 ChangeByte- 00:06:44.371 [2024-11-26 19:33:19.623181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:44.371 [2024-11-26 19:33:19.623208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.371 [2024-11-26 19:33:19.623259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:44.371 [2024-11-26 19:33:19.623277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.371 #61 NEW cov: 12571 ft: 15962 corp: 36/2189b lim: 90 exec/s: 30 rss: 75Mb L: 53/86 MS: 1 EraseBytes- 00:06:44.371 #61 DONE cov: 12571 ft: 15962 corp: 36/2189b lim: 90 exec/s: 30 rss: 75Mb 00:06:44.371 ###### Recommended dictionary. ###### 00:06:44.371 "\377\377\377\377\377\377\377\377" # Uses: 2 00:06:44.371 ###### End of recommended dictionary. 
###### 00:06:44.371 Done 61 runs in 2 second(s) 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:44.630 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:44.631 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:44.631 19:33:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:06:44.631 [2024-11-26 19:33:19.818612] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:06:44.631 [2024-11-26 19:33:19.818699] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128666 ] 00:06:44.890 [2024-11-26 19:33:20.088224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.890 [2024-11-26 19:33:20.138802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.890 [2024-11-26 19:33:20.198301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.149 [2024-11-26 19:33:20.214675] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:06:45.149 INFO: Running with entropic power schedule (0xFF, 100). 00:06:45.149 INFO: Seed: 1965427913 00:06:45.149 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:45.149 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:45.149 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:06:45.149 INFO: A corpus is not provided, starting from an empty corpus 00:06:45.149 #2 INITED exec/s: 0 rss: 67Mb 00:06:45.149 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:45.149 This may also happen if the target rejected all inputs we tried so far 00:06:45.149 [2024-11-26 19:33:20.263047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:45.149 [2024-11-26 19:33:20.263080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.408 NEW_FUNC[1/718]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:06:45.408 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:45.408 #6 NEW cov: 12319 ft: 12304 corp: 2/11b lim: 50 exec/s: 0 rss: 74Mb L: 10/10 MS: 4 InsertRepeatedBytes-CrossOver-ShuffleBytes-InsertByte- 00:06:45.408 [2024-11-26 19:33:20.583894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:45.408 [2024-11-26 19:33:20.583936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.408 #7 NEW cov: 12432 ft: 12859 corp: 3/21b lim: 50 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CMP- DE: "\377\377\377\""- 00:06:45.408 [2024-11-26 19:33:20.643975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:45.408 [2024-11-26 19:33:20.644005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.408 #8 NEW cov: 12438 ft: 13145 corp: 4/31b lim: 50 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:06:45.408 [2024-11-26 19:33:20.684033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:45.408 [2024-11-26 19:33:20.684062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.408 #9 NEW cov: 12523 ft: 13455 corp: 5/41b lim: 50 exec/s: 0 rss: 74Mb L: 10/10 
MS: 1 ChangeByte- 00:06:45.667 [2024-11-26 19:33:20.724139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:45.667 [2024-11-26 19:33:20.724167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.667 #10 NEW cov: 12523 ft: 13553 corp: 6/51b lim: 50 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ChangeByte- 00:06:45.667 [2024-11-26 19:33:20.784737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:45.667 [2024-11-26 19:33:20.784764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.667 [2024-11-26 19:33:20.784812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:45.667 [2024-11-26 19:33:20.784827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.667 [2024-11-26 19:33:20.784881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:45.667 [2024-11-26 19:33:20.784898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.667 [2024-11-26 19:33:20.784951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:45.667 [2024-11-26 19:33:20.784965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.667 #11 NEW cov: 12523 ft: 14414 corp: 7/99b lim: 50 exec/s: 0 rss: 74Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:06:45.667 [2024-11-26 19:33:20.824551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:45.667 [2024-11-26 19:33:20.824578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.668 [2024-11-26 19:33:20.824627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:45.668 [2024-11-26 19:33:20.824644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.668 #12 NEW cov: 12523 ft: 14752 corp: 8/123b lim: 50 exec/s: 0 rss: 74Mb L: 24/48 MS: 1 InsertRepeatedBytes- 00:06:45.668 [2024-11-26 19:33:20.885040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:45.668 [2024-11-26 19:33:20.885068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.668 [2024-11-26 19:33:20.885115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:45.668 [2024-11-26 19:33:20.885131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.668 [2024-11-26 19:33:20.885183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:45.668 [2024-11-26 19:33:20.885198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:06:45.668 [2024-11-26 19:33:20.885250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:45.668 [2024-11-26 19:33:20.885265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.668 #15 NEW cov: 12523 ft: 14877 corp: 9/163b lim: 50 exec/s: 0 rss: 75Mb L: 40/48 MS: 3 ShuffleBytes-CMP-InsertRepeatedBytes- DE: "\000\000\000\000\377\377\377\377"- 00:06:45.668 [2024-11-26 19:33:20.924983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:45.668 [2024-11-26 19:33:20.925011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.668 [2024-11-26 19:33:20.925048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:45.668 [2024-11-26 19:33:20.925064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.668 [2024-11-26 19:33:20.925118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:45.668 [2024-11-26 19:33:20.925134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.668 #16 NEW cov: 12523 ft: 15150 corp: 10/200b lim: 50 exec/s: 0 rss: 75Mb L: 37/48 MS: 1 EraseBytes- 00:06:45.927 [2024-11-26 19:33:20.985275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:45.927 [2024-11-26 19:33:20.985303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.927 [2024-11-26 19:33:20.985349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:45.927 [2024-11-26 19:33:20.985364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.927 [2024-11-26 19:33:20.985419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:45.927 [2024-11-26 19:33:20.985435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.927 [2024-11-26 19:33:20.985485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:45.927 [2024-11-26 19:33:20.985500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.927 #17 NEW cov: 12523 ft: 15222 corp: 11/248b lim: 50 exec/s: 0 rss: 75Mb L: 48/48 MS: 1 ChangeBinInt- 00:06:45.927 [2024-11-26 19:33:21.024963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:45.927 [2024-11-26 19:33:21.024991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.927 #19 NEW cov: 12523 ft: 15344 corp: 12/258b lim: 50 exec/s: 0 rss: 75Mb L: 10/48 MS: 2 EraseBytes-InsertByte- 00:06:45.927 [2024-11-26 19:33:21.085132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE 
(15) sqid:1 cid:0 nsid:0 00:06:45.927 [2024-11-26 19:33:21.085160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.927 #20 NEW cov: 12523 ft: 15373 corp: 13/268b lim: 50 exec/s: 0 rss: 75Mb L: 10/48 MS: 1 CopyPart- 00:06:45.927 [2024-11-26 19:33:21.125696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:45.927 [2024-11-26 19:33:21.125723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.927 [2024-11-26 19:33:21.125769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:45.927 [2024-11-26 19:33:21.125785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.927 [2024-11-26 19:33:21.125837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:45.927 [2024-11-26 19:33:21.125851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.927 [2024-11-26 19:33:21.125905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:45.927 [2024-11-26 19:33:21.125920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.927 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:45.927 #21 NEW cov: 12546 ft: 15409 corp: 14/308b lim: 50 exec/s: 0 rss: 75Mb L: 40/48 MS: 1 ShuffleBytes- 00:06:45.927 [2024-11-26 19:33:21.185421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:45.927 [2024-11-26 19:33:21.185450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.927 #22 NEW cov: 12546 ft: 15426 corp: 15/319b lim: 50 exec/s: 0 rss: 75Mb L: 11/48 MS: 1 CopyPart- 00:06:46.187 [2024-11-26 19:33:21.245606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.187 [2024-11-26 19:33:21.245635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.187 #23 NEW cov: 12546 ft: 15446 corp: 16/329b lim: 50 exec/s: 23 rss: 75Mb L: 10/48 MS: 1 CopyPart- 00:06:46.187 [2024-11-26 19:33:21.286139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.187 [2024-11-26 19:33:21.286166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.187 [2024-11-26 19:33:21.286211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.187 [2024-11-26 19:33:21.286227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.187 [2024-11-26 19:33:21.286281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:46.187 [2024-11-26 19:33:21.286297] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.187 [2024-11-26 19:33:21.286349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:46.187 [2024-11-26 19:33:21.286364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.187 #24 NEW cov: 12546 ft: 15455 corp: 17/377b lim: 50 exec/s: 24 rss: 75Mb L: 48/48 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:46.187 [2024-11-26 19:33:21.325816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.187 [2024-11-26 19:33:21.325844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.187 #25 NEW cov: 12546 ft: 15482 corp: 18/387b lim: 50 exec/s: 25 rss: 75Mb L: 10/48 MS: 1 CrossOver- 00:06:46.187 [2024-11-26 19:33:21.385992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.187 [2024-11-26 19:33:21.386020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.187 #26 NEW cov: 12546 ft: 15495 corp: 19/397b lim: 50 exec/s: 26 rss: 75Mb L: 10/48 MS: 1 CopyPart- 00:06:46.187 [2024-11-26 19:33:21.446579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.187 [2024-11-26 19:33:21.446611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.187 [2024-11-26 19:33:21.446659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.187 [2024-11-26 19:33:21.446676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.187 [2024-11-26 19:33:21.446728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:46.187 [2024-11-26 19:33:21.446742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.187 [2024-11-26 19:33:21.446798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:46.187 [2024-11-26 19:33:21.446815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.187 #27 NEW cov: 12546 ft: 15504 corp: 20/445b lim: 50 exec/s: 27 rss: 75Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:06:46.446 [2024-11-26 19:33:21.506772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.446 [2024-11-26 19:33:21.506799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.446 [2024-11-26 19:33:21.506844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.446 [2024-11-26 19:33:21.506860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.446 [2024-11-26 19:33:21.506914] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:46.446 [2024-11-26 19:33:21.506930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.446 [2024-11-26 19:33:21.506983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:46.446 [2024-11-26 19:33:21.506997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.446 #28 NEW cov: 12546 ft: 15508 corp: 21/485b lim: 50 exec/s: 28 rss: 75Mb L: 40/48 MS: 1 CopyPart- 00:06:46.446 [2024-11-26 19:33:21.546564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.446 [2024-11-26 19:33:21.546592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.446 [2024-11-26 19:33:21.546648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.446 [2024-11-26 19:33:21.546663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.446 #29 NEW cov: 12546 ft: 15524 corp: 22/506b lim: 50 exec/s: 29 rss: 76Mb L: 21/48 MS: 1 CrossOver- 00:06:46.446 [2024-11-26 19:33:21.606873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.446 [2024-11-26 19:33:21.606899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.446 [2024-11-26 19:33:21.606935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.446 [2024-11-26 19:33:21.606950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.446 [2024-11-26 19:33:21.607006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:46.446 [2024-11-26 19:33:21.607022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.446 #30 NEW cov: 12546 ft: 15542 corp: 23/538b lim: 50 exec/s: 30 rss: 76Mb L: 32/48 MS: 1 EraseBytes- 00:06:46.446 [2024-11-26 19:33:21.667047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.446 [2024-11-26 19:33:21.667074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.446 [2024-11-26 19:33:21.667120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.446 [2024-11-26 19:33:21.667136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.446 [2024-11-26 19:33:21.667189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:46.446 [2024-11-26 19:33:21.667204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.446 #31 NEW cov: 12546 ft: 15565 corp: 24/575b lim: 50 
exec/s: 31 rss: 76Mb L: 37/48 MS: 1 ShuffleBytes- 00:06:46.446 [2024-11-26 19:33:21.727385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.446 [2024-11-26 19:33:21.727413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.446 [2024-11-26 19:33:21.727461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.446 [2024-11-26 19:33:21.727477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.446 [2024-11-26 19:33:21.727530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:46.446 [2024-11-26 19:33:21.727560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.446 [2024-11-26 19:33:21.727615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:46.446 [2024-11-26 19:33:21.727630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.447 #32 NEW cov: 12546 ft: 15581 corp: 25/618b lim: 50 exec/s: 32 rss: 76Mb L: 43/48 MS: 1 InsertRepeatedBytes- 00:06:46.706 [2024-11-26 19:33:21.767034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.706 [2024-11-26 19:33:21.767062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.706 #33 NEW cov: 12546 ft: 15632 corp: 26/628b lim: 50 exec/s: 33 rss: 76Mb L: 10/48 MS: 1 PersAutoDict- DE: "\377\377\377\""- 00:06:46.706 [2024-11-26 19:33:21.807159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.706 [2024-11-26 19:33:21.807186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.706 #39 NEW cov: 12546 ft: 15700 corp: 27/638b lim: 50 exec/s: 39 rss: 76Mb L: 10/48 MS: 1 CopyPart- 00:06:46.706 [2024-11-26 19:33:21.867814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.706 [2024-11-26 19:33:21.867840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.706 [2024-11-26 19:33:21.867886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.706 [2024-11-26 19:33:21.867902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.706 [2024-11-26 19:33:21.867957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:46.706 [2024-11-26 19:33:21.867973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.706 [2024-11-26 19:33:21.868026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:46.706 [2024-11-26 19:33:21.868042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.706 [2024-11-26 19:33:21.907885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.706 [2024-11-26 19:33:21.907913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.706 [2024-11-26 19:33:21.907963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.706 [2024-11-26 19:33:21.907979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.706 [2024-11-26 19:33:21.908033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:46.706 [2024-11-26 19:33:21.908048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.706 [2024-11-26 19:33:21.908101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:46.706 [2024-11-26 19:33:21.908116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.706 #41 NEW cov: 12546 ft: 15704 corp: 28/679b lim: 50 exec/s: 41 rss: 76Mb L: 41/48 MS: 2 InsertByte-PersAutoDict- DE: "\000\000\000\000\377\377\377\377"- 00:06:46.706 [2024-11-26 19:33:21.947962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.706 [2024-11-26 19:33:21.947990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.706 [2024-11-26 19:33:21.948035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.706 [2024-11-26 19:33:21.948049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.706 [2024-11-26 19:33:21.948104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:46.706 [2024-11-26 19:33:21.948120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.706 [2024-11-26 19:33:21.948173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:46.706 [2024-11-26 19:33:21.948188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.706 #42 NEW cov: 12546 ft: 15748 corp: 29/727b lim: 50 exec/s: 42 rss: 76Mb L: 48/48 MS: 1 ChangeBit- 00:06:46.706 [2024-11-26 19:33:21.988111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.706 [2024-11-26 19:33:21.988138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.706 [2024-11-26 19:33:21.988178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.706 [2024-11-26 19:33:21.988194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 
m:0 dnr:1 00:06:46.706 [2024-11-26 19:33:21.988245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:46.706 [2024-11-26 19:33:21.988261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.706 [2024-11-26 19:33:21.988317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:46.706 [2024-11-26 19:33:21.988333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.706 #43 NEW cov: 12546 ft: 15763 corp: 30/771b lim: 50 exec/s: 43 rss: 76Mb L: 44/48 MS: 1 InsertRepeatedBytes- 00:06:46.965 [2024-11-26 19:33:22.027730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.965 [2024-11-26 19:33:22.027757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.965 #44 NEW cov: 12546 ft: 15775 corp: 31/782b lim: 50 exec/s: 44 rss: 76Mb L: 11/48 MS: 1 InsertByte- 00:06:46.965 [2024-11-26 19:33:22.067999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.965 [2024-11-26 19:33:22.068027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.965 [2024-11-26 19:33:22.068080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.965 [2024-11-26 19:33:22.068095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.965 #45 NEW cov: 12546 ft: 15785 corp: 32/806b lim: 50 exec/s: 45 rss: 76Mb L: 24/48 MS: 1 ChangeASCIIInt- 00:06:46.965 [2024-11-26 19:33:22.128192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.965 [2024-11-26 19:33:22.128219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.965 [2024-11-26 19:33:22.128255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.965 [2024-11-26 19:33:22.128270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.965 #46 NEW cov: 12546 ft: 15789 corp: 33/833b lim: 50 exec/s: 46 rss: 76Mb L: 27/48 MS: 1 CrossOver- 00:06:46.965 [2024-11-26 19:33:22.188337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.965 [2024-11-26 19:33:22.188365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.965 [2024-11-26 19:33:22.188401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.965 [2024-11-26 19:33:22.188416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.965 #47 NEW cov: 12546 ft: 15797 corp: 34/861b lim: 50 exec/s: 47 rss: 76Mb L: 28/48 MS: 1 CrossOver- 00:06:46.965 [2024-11-26 19:33:22.248791] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:46.965 [2024-11-26 19:33:22.248819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.965 [2024-11-26 19:33:22.248861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:46.965 [2024-11-26 19:33:22.248876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.965 [2024-11-26 19:33:22.248932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:46.965 [2024-11-26 19:33:22.248964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.965 [2024-11-26 19:33:22.249022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:46.965 [2024-11-26 19:33:22.249038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.225 #48 NEW cov: 12546 ft: 15802 corp: 35/909b lim: 50 exec/s: 24 rss: 76Mb L: 48/48 MS: 1 PersAutoDict- DE: "\000\000\000\000\377\377\377\377"- 00:06:47.225 #48 DONE cov: 12546 ft: 15802 corp: 35/909b lim: 50 exec/s: 24 rss: 76Mb 00:06:47.225 ###### Recommended dictionary. ###### 00:06:47.225 "\377\377\377\"" # Uses: 1 00:06:47.225 "\000\000\000\000\377\377\377\377" # Uses: 2 00:06:47.225 "\000\000\000\000" # Uses: 0 00:06:47.225 ###### End of recommended dictionary. ###### 00:06:47.225 Done 48 runs in 2 second(s) 00:06:47.225 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:06:47.225 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:47.225 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:47.225 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:06:47.225 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:06:47.225 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:47.225 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:47.225 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:47.225 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:06:47.225 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:47.225 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:47.226 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:06:47.226 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:06:47.226 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:47.226 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:06:47.226 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": 
"4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:47.226 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:47.226 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:47.226 19:33:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:06:47.226 [2024-11-26 19:33:22.441899] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:06:47.226 [2024-11-26 19:33:22.441967] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128986 ] 00:06:47.485 [2024-11-26 19:33:22.704579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.485 [2024-11-26 19:33:22.757492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.744 [2024-11-26 19:33:22.816655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.744 [2024-11-26 19:33:22.832996] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:06:47.744 INFO: Running with entropic power schedule (0xFF, 100). 00:06:47.744 INFO: Seed: 290480153 00:06:47.744 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:47.744 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:47.744 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:47.744 INFO: A corpus is not provided, starting from an empty corpus 00:06:47.744 #2 INITED exec/s: 0 rss: 65Mb 00:06:47.744 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:47.744 This may also happen if the target rejected all inputs we tried so far 00:06:47.744 [2024-11-26 19:33:22.910708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:47.744 [2024-11-26 19:33:22.910752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.744 [2024-11-26 19:33:22.910832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:47.744 [2024-11-26 19:33:22.910847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.744 [2024-11-26 19:33:22.910916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:47.744 [2024-11-26 19:33:22.910933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.744 [2024-11-26 19:33:22.911003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:47.744 [2024-11-26 19:33:22.911019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.003 NEW_FUNC[1/718]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:06:48.003 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:48.003 #8 NEW cov: 12345 ft: 12346 corp: 2/72b lim: 85 exec/s: 0 rss: 73Mb L: 71/71 MS: 1 InsertRepeatedBytes- 00:06:48.003 [2024-11-26 19:33:23.250800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:48.003 [2024-11-26 19:33:23.250846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.003 [2024-11-26 19:33:23.250959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:48.003 [2024-11-26 19:33:23.250982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.003 [2024-11-26 19:33:23.251104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:48.003 [2024-11-26 19:33:23.251124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.003 [2024-11-26 19:33:23.251249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:48.003 [2024-11-26 19:33:23.251270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.003 #14 NEW cov: 12458 ft: 13147 corp: 3/143b lim: 85 exec/s: 0 rss: 73Mb L: 71/71 MS: 1 ChangeBinInt- 00:06:48.262 [2024-11-26 19:33:23.320978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:48.262 [2024-11-26 19:33:23.321013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:06:48.262 [2024-11-26 19:33:23.321136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:48.262 [2024-11-26 19:33:23.321157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.263 [2024-11-26 19:33:23.321276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:48.263 [2024-11-26 19:33:23.321301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.263 [2024-11-26 19:33:23.321421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:48.263 [2024-11-26 19:33:23.321443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.263 #15 NEW cov: 12464 ft: 13359 corp: 4/215b lim: 85 exec/s: 0 rss: 73Mb L: 72/72 MS: 1 InsertByte- 00:06:48.263 [2024-11-26 19:33:23.390831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:48.263 [2024-11-26 19:33:23.390866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.263 [2024-11-26 19:33:23.390993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:48.263 [2024-11-26 19:33:23.391020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.263 [2024-11-26 19:33:23.391143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:48.263 [2024-11-26 19:33:23.391162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.263 #16 NEW cov: 12549 ft: 13994 corp: 5/272b lim: 85 exec/s: 0 rss: 73Mb L: 57/72 MS: 1 InsertRepeatedBytes- 00:06:48.263 [2024-11-26 19:33:23.441297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:48.263 [2024-11-26 19:33:23.441330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.263 [2024-11-26 19:33:23.441454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:48.263 [2024-11-26 19:33:23.441476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.263 [2024-11-26 19:33:23.441601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:48.263 [2024-11-26 19:33:23.441624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.263 [2024-11-26 19:33:23.441742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:48.263 [2024-11-26 19:33:23.441766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.263 #17 NEW cov: 12549 ft: 14071 corp: 6/343b lim: 85 exec/s: 0 rss: 73Mb L: 71/72 MS: 1 
ChangeByte- 00:06:48.263 [2024-11-26 19:33:23.491163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:48.263 [2024-11-26 19:33:23.491196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.263 [2024-11-26 19:33:23.491320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:48.263 [2024-11-26 19:33:23.491340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.263 [2024-11-26 19:33:23.491454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:48.263 [2024-11-26 19:33:23.491477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.263 #18 NEW cov: 12549 ft: 14105 corp: 7/400b lim: 85 exec/s: 0 rss: 73Mb L: 57/72 MS: 1 ChangeBinInt- 00:06:48.263 [2024-11-26 19:33:23.561234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:48.263 [2024-11-26 19:33:23.561264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.263 [2024-11-26 19:33:23.561355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:48.263 [2024-11-26 19:33:23.561374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.263 [2024-11-26 19:33:23.561492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:48.263 [2024-11-26 19:33:23.561511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.522 #19 NEW cov: 12549 ft: 14177 corp: 8/457b lim: 85 exec/s: 0 rss: 74Mb L: 57/72 MS: 1 ChangeBinInt- 00:06:48.522 [2024-11-26 19:33:23.631767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:48.522 [2024-11-26 19:33:23.631798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.522 [2024-11-26 19:33:23.631881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:48.522 [2024-11-26 19:33:23.631905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.522 [2024-11-26 19:33:23.632018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:48.522 [2024-11-26 19:33:23.632042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.522 [2024-11-26 19:33:23.632169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:48.522 [2024-11-26 19:33:23.632190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.522 #20 NEW cov: 12549 ft: 14191 corp: 9/529b lim: 85 exec/s: 0 rss: 74Mb L: 72/72 MS: 1 
InsertByte- 00:06:48.522 [2024-11-26 19:33:23.702062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:48.522 [2024-11-26 19:33:23.702093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.522 [2024-11-26 19:33:23.702177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:48.522 [2024-11-26 19:33:23.702195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.522 [2024-11-26 19:33:23.702307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:48.522 [2024-11-26 19:33:23.702328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.522 [2024-11-26 19:33:23.702454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:48.522 [2024-11-26 19:33:23.702477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.522 #21 NEW cov: 12549 ft: 14273 corp: 10/601b lim: 85 exec/s: 0 rss: 74Mb L: 72/72 MS: 1 ShuffleBytes- 00:06:48.522 [2024-11-26 19:33:23.772262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:48.522 [2024-11-26 19:33:23.772297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.522 [2024-11-26 19:33:23.772386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:48.522 [2024-11-26 19:33:23.772406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.522 [2024-11-26 19:33:23.772531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:48.522 [2024-11-26 19:33:23.772553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.523 [2024-11-26 19:33:23.772672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:48.523 [2024-11-26 19:33:23.772696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.523 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:48.523 #22 NEW cov: 12572 ft: 14424 corp: 11/672b lim: 85 exec/s: 0 rss: 74Mb L: 71/72 MS: 1 ShuffleBytes- 00:06:48.523 [2024-11-26 19:33:23.822355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:48.523 [2024-11-26 19:33:23.822385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.523 [2024-11-26 19:33:23.822448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:48.523 [2024-11-26 19:33:23.822469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.523 [2024-11-26 19:33:23.822608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:48.523 [2024-11-26 19:33:23.822628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.523 [2024-11-26 19:33:23.822747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:48.523 [2024-11-26 19:33:23.822770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.782 #23 NEW cov: 12572 ft: 14462 corp: 12/744b lim: 85 exec/s: 0 rss: 74Mb L: 72/72 MS: 1 ShuffleBytes- 00:06:48.782 [2024-11-26 19:33:23.872112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:48.782 [2024-11-26 19:33:23.872143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.782 [2024-11-26 19:33:23.872234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:48.782 [2024-11-26 19:33:23.872252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.782 [2024-11-26 19:33:23.872369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:48.782 [2024-11-26 19:33:23.872391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.782 #24 NEW cov: 12572 ft: 14530 corp: 13/801b lim: 85 exec/s: 24 rss: 74Mb L: 57/72 MS: 1 CrossOver- 00:06:48.782 [2024-11-26 19:33:23.942692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:48.782 [2024-11-26 19:33:23.942728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.782 [2024-11-26 19:33:23.942806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:48.782 [2024-11-26 19:33:23.942827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.782 [2024-11-26 19:33:23.942947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:48.782 [2024-11-26 19:33:23.942970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.782 [2024-11-26 19:33:23.943097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:48.782 [2024-11-26 19:33:23.943121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.782 #25 NEW cov: 12572 ft: 14559 corp: 14/873b lim: 85 exec/s: 25 rss: 74Mb L: 72/72 MS: 1 InsertByte- 00:06:48.782 [2024-11-26 19:33:23.992921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:48.782 [2024-11-26 19:33:23.992955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.782 [2024-11-26 19:33:23.993052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:48.782 [2024-11-26 19:33:23.993077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.782 [2024-11-26 19:33:23.993202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:48.782 [2024-11-26 19:33:23.993224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.782 [2024-11-26 19:33:23.993351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:48.782 [2024-11-26 19:33:23.993374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.782 #26 NEW cov: 12572 ft: 14595 corp: 15/944b lim: 85 exec/s: 26 rss: 74Mb L: 71/72 MS: 1 ChangeByte- 00:06:48.782 [2024-11-26 19:33:24.063194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:48.782 [2024-11-26 19:33:24.063225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.782 [2024-11-26 19:33:24.063325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:48.782 [2024-11-26 19:33:24.063346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.782 [2024-11-26 19:33:24.063456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:48.782 [2024-11-26 19:33:24.063482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.782 [2024-11-26 19:33:24.063602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:48.782 [2024-11-26 19:33:24.063636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.782 #27 NEW cov: 12572 ft: 14617 corp: 16/1015b lim: 85 exec/s: 27 rss: 74Mb L: 71/72 MS: 1 ShuffleBytes- 00:06:49.042 [2024-11-26 19:33:24.113315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.042 [2024-11-26 19:33:24.113350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.042 [2024-11-26 19:33:24.113461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.042 [2024-11-26 19:33:24.113489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.042 [2024-11-26 19:33:24.113613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:49.042 [2024-11-26 19:33:24.113634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.042 [2024-11-26 19:33:24.113755] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:49.042 [2024-11-26 19:33:24.113777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.042 #28 NEW cov: 12572 ft: 14630 corp: 17/1087b lim: 85 exec/s: 28 rss: 74Mb L: 72/72 MS: 1 InsertByte- 00:06:49.042 [2024-11-26 19:33:24.163140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.042 [2024-11-26 19:33:24.163174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.042 [2024-11-26 19:33:24.163289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.042 [2024-11-26 19:33:24.163315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.042 [2024-11-26 19:33:24.163439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:49.042 [2024-11-26 19:33:24.163464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.042 #29 NEW cov: 12572 ft: 14674 corp: 18/1144b lim: 85 exec/s: 29 rss: 74Mb L: 57/72 MS: 1 ChangeBinInt- 00:06:49.042 [2024-11-26 19:33:24.213590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.042 [2024-11-26 19:33:24.213623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.042 [2024-11-26 19:33:24.213703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.042 [2024-11-26 19:33:24.213729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.042 [2024-11-26 19:33:24.213851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:49.042 [2024-11-26 19:33:24.213868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.042 [2024-11-26 19:33:24.213999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:49.042 [2024-11-26 19:33:24.214020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.042 #30 NEW cov: 12572 ft: 14690 corp: 19/1216b lim: 85 exec/s: 30 rss: 74Mb L: 72/72 MS: 1 InsertByte- 00:06:49.042 [2024-11-26 19:33:24.263834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.042 [2024-11-26 19:33:24.263865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.042 [2024-11-26 19:33:24.263978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.042 [2024-11-26 19:33:24.264003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.042 [2024-11-26 19:33:24.264119] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:49.042 [2024-11-26 19:33:24.264143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.042 [2024-11-26 19:33:24.264267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:49.042 [2024-11-26 19:33:24.264290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.042 #31 NEW cov: 12572 ft: 14692 corp: 20/1292b lim: 85 exec/s: 31 rss: 74Mb L: 76/76 MS: 1 CrossOver- 00:06:49.042 [2024-11-26 19:33:24.313891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.042 [2024-11-26 19:33:24.313921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.042 [2024-11-26 19:33:24.314006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.042 [2024-11-26 19:33:24.314031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.042 [2024-11-26 19:33:24.314151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:49.042 [2024-11-26 19:33:24.314170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.042 [2024-11-26 19:33:24.314290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:49.042 [2024-11-26 19:33:24.314312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.042 #32 NEW cov: 12572 ft: 14721 corp: 21/1363b lim: 85 exec/s: 32 rss: 74Mb L: 71/76 MS: 1 ChangeBit- 00:06:49.301 [2024-11-26 19:33:24.364002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.301 [2024-11-26 19:33:24.364034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.301 [2024-11-26 19:33:24.364115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.301 [2024-11-26 19:33:24.364135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.301 [2024-11-26 19:33:24.364254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:49.301 [2024-11-26 19:33:24.364273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.301 [2024-11-26 19:33:24.364388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:49.301 [2024-11-26 19:33:24.364409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.301 #38 NEW cov: 12572 ft: 14734 corp: 22/1434b lim: 85 exec/s: 38 rss: 74Mb L: 71/76 MS: 1 CrossOver- 00:06:49.301 [2024-11-26 
19:33:24.434294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.301 [2024-11-26 19:33:24.434326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.301 [2024-11-26 19:33:24.434423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.301 [2024-11-26 19:33:24.434445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.301 [2024-11-26 19:33:24.434560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:49.301 [2024-11-26 19:33:24.434585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.301 [2024-11-26 19:33:24.434708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:49.301 [2024-11-26 19:33:24.434732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.301 #39 NEW cov: 12572 ft: 14744 corp: 23/1506b lim: 85 exec/s: 39 rss: 74Mb L: 72/76 MS: 1 CopyPart- 00:06:49.301 [2024-11-26 19:33:24.504384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.301 [2024-11-26 19:33:24.504414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.301 [2024-11-26 19:33:24.504488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.301 [2024-11-26 19:33:24.504511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.301 [2024-11-26 19:33:24.504635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:49.301 [2024-11-26 19:33:24.504659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.301 [2024-11-26 19:33:24.504786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:49.302 [2024-11-26 19:33:24.504805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.302 #40 NEW cov: 12572 ft: 14782 corp: 24/1582b lim: 85 exec/s: 40 rss: 74Mb L: 76/76 MS: 1 CMP- DE: " \000\000\000"- 00:06:49.302 [2024-11-26 19:33:24.574705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.302 [2024-11-26 19:33:24.574734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.302 [2024-11-26 19:33:24.574807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.302 [2024-11-26 19:33:24.574829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.302 [2024-11-26 19:33:24.574939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER 
(0d) sqid:1 cid:2 nsid:0 00:06:49.302 [2024-11-26 19:33:24.574964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.302 [2024-11-26 19:33:24.575083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:49.302 [2024-11-26 19:33:24.575104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.302 #41 NEW cov: 12572 ft: 14817 corp: 25/1655b lim: 85 exec/s: 41 rss: 74Mb L: 73/76 MS: 1 InsertByte- 00:06:49.561 [2024-11-26 19:33:24.624807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.561 [2024-11-26 19:33:24.624839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.561 [2024-11-26 19:33:24.624907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.561 [2024-11-26 19:33:24.624930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.561 [2024-11-26 19:33:24.625046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:49.561 [2024-11-26 19:33:24.625070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.561 [2024-11-26 19:33:24.625197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:49.561 [2024-11-26 19:33:24.625220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.561 #42 NEW cov: 12572 ft: 14837 corp: 26/1726b lim: 85 exec/s: 42 rss: 74Mb L: 71/76 MS: 1 PersAutoDict- DE: " \000\000\000"- 00:06:49.561 [2024-11-26 19:33:24.674883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.561 [2024-11-26 19:33:24.674913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.561 [2024-11-26 19:33:24.674985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.561 [2024-11-26 19:33:24.675006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.561 [2024-11-26 19:33:24.675130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:49.561 [2024-11-26 19:33:24.675157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.561 [2024-11-26 19:33:24.675281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:49.561 [2024-11-26 19:33:24.675305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.561 #43 NEW cov: 12572 ft: 14866 corp: 27/1797b lim: 85 exec/s: 43 rss: 74Mb L: 71/76 MS: 1 ChangeByte- 00:06:49.561 [2024-11-26 19:33:24.725142] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.561 [2024-11-26 19:33:24.725175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.561 [2024-11-26 19:33:24.725267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.561 [2024-11-26 19:33:24.725291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.561 [2024-11-26 19:33:24.725417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:49.562 [2024-11-26 19:33:24.725440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.562 [2024-11-26 19:33:24.725561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:49.562 [2024-11-26 19:33:24.725583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.562 #44 NEW cov: 12572 ft: 14919 corp: 28/1868b lim: 85 exec/s: 44 rss: 74Mb L: 71/76 MS: 1 ShuffleBytes- 00:06:49.562 [2024-11-26 19:33:24.775151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.562 [2024-11-26 19:33:24.775182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.562 [2024-11-26 19:33:24.775248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.562 [2024-11-26 19:33:24.775271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.562 [2024-11-26 19:33:24.775393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:49.562 [2024-11-26 19:33:24.775414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.562 [2024-11-26 19:33:24.775537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:49.562 [2024-11-26 19:33:24.775556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.562 #45 NEW cov: 12572 ft: 14927 corp: 29/1940b lim: 85 exec/s: 45 rss: 75Mb L: 72/76 MS: 1 ChangeBinInt- 00:06:49.562 [2024-11-26 19:33:24.845209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.562 [2024-11-26 19:33:24.845243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.562 [2024-11-26 19:33:24.845338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.562 [2024-11-26 19:33:24.845362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.562 [2024-11-26 19:33:24.845483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:49.562 [2024-11-26 19:33:24.845504] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.562 #46 NEW cov: 12572 ft: 14947 corp: 30/2001b lim: 85 exec/s: 46 rss: 75Mb L: 61/76 MS: 1 PersAutoDict- DE: " \000\000\000"- 00:06:49.821 [2024-11-26 19:33:24.895286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:49.821 [2024-11-26 19:33:24.895321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.821 [2024-11-26 19:33:24.895433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:49.822 [2024-11-26 19:33:24.895458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.822 [2024-11-26 19:33:24.895587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:49.822 [2024-11-26 19:33:24.895613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.822 #47 NEW cov: 12572 ft: 14954 corp: 31/2065b lim: 85 exec/s: 23 rss: 75Mb L: 64/76 MS: 1 InsertRepeatedBytes- 00:06:49.822 #47 DONE cov: 12572 ft: 14954 corp: 31/2065b lim: 85 exec/s: 23 rss: 75Mb 00:06:49.822 ###### Recommended dictionary. ###### 00:06:49.822 " \000\000\000" # Uses: 2 00:06:49.822 ###### End of recommended dictionary. ###### 00:06:49.822 Done 47 runs in 2 second(s) 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:49.822 
19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:49.822 19:33:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:06:49.822 [2024-11-26 19:33:25.072904] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:06:49.822 [2024-11-26 19:33:25.072967] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1129514 ] 00:06:50.081 [2024-11-26 19:33:25.332825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.081 [2024-11-26 19:33:25.387625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.339 [2024-11-26 19:33:25.446937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.339 [2024-11-26 19:33:25.463310] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:06:50.339 INFO: Running with entropic power schedule (0xFF, 100). 00:06:50.339 INFO: Seed: 2919454022 00:06:50.339 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:50.339 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:50.339 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:06:50.339 INFO: A corpus is not provided, starting from an empty corpus 00:06:50.339 #2 INITED exec/s: 0 rss: 66Mb 00:06:50.339 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
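For reference, the nvmf/run.sh steps traced above for runs 22 and 23 amount to roughly the following per-run setup. This is a minimal hand-reconstructed sketch, not the script itself: the rootdir variable with its default, the output redirections on the sed and echo steps, and the 44$(printf %02d N) port derivation are assumptions inferred from the trace; the flags, paths, transport-ID fields, and leak-suppression entries are copied from the log.

# Sketch of start_llvm_fuzz <fuzzer_type> <time> <coremask>, as invoked above with
# "start_llvm_fuzz 23 1 0x1". Hypothetical reconstruction, not verbatim run.sh.
start_llvm_fuzz() {
    local fuzzer_type=$1 timen=$2 core=$3
    local rootdir=${rootdir:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}   # path taken from the trace
    local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
    local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
    local suppress_file=/var/tmp/suppress_nvmf_fuzz
    local LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0   # as in run.sh@32 above

    # Each fuzzer type gets its own TCP listener port: "44" plus the zero-padded type
    # (4422 for type 22, 4423 for type 23 in the runs above).
    local port=44$(printf %02d $fuzzer_type)
    mkdir -p $corpus_dir
    local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

    # Point the stock JSON config at this run's port; the redirect into $nvmf_cfg is
    # assumed, the trace only shows the sed command itself (run.sh@38).
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        $rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf > $nvmf_cfg

    # Suppress known-benign leaks for LeakSanitizer (redirects assumed, run.sh@41-42).
    echo leak:spdk_nvmf_qpair_disconnect > $suppress_file
    echo leak:nvmf_ctrlr_create >> $suppress_file

    # Run one fuzzer instance against the per-run listener for $timen minute(s).
    $rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m $core -s 512 \
        -P $rootdir/../output/llvm/ -F "$trid" -c $nvmf_cfg -t $timen \
        -D $corpus_dir -Z $fuzzer_type
}

After each run the caller removes $nvmf_cfg and $suppress_file (the rm -rf at run.sh@54 above) before moving on to the next fuzzer type.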
00:06:50.339 This may also happen if the target rejected all inputs we tried so far 00:06:50.339 [2024-11-26 19:33:25.534262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:50.339 [2024-11-26 19:33:25.534300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.339 [2024-11-26 19:33:25.534379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:50.339 [2024-11-26 19:33:25.534396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.339 [2024-11-26 19:33:25.534472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:50.339 [2024-11-26 19:33:25.534490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.599 NEW_FUNC[1/717]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:06:50.599 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:50.599 #9 NEW cov: 12278 ft: 12244 corp: 2/18b lim: 25 exec/s: 0 rss: 74Mb L: 17/17 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:50.599 [2024-11-26 19:33:25.884178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:50.599 [2024-11-26 19:33:25.884217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.599 [2024-11-26 19:33:25.884345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:50.599 [2024-11-26 19:33:25.884369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.877 #10 NEW cov: 12391 ft: 13184 corp: 3/29b lim: 25 exec/s: 0 rss: 74Mb L: 11/17 MS: 1 InsertRepeatedBytes- 00:06:50.877 [2024-11-26 19:33:25.934250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:50.877 [2024-11-26 19:33:25.934286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.877 [2024-11-26 19:33:25.934403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:50.877 [2024-11-26 19:33:25.934424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.877 #11 NEW cov: 12397 ft: 13478 corp: 4/42b lim: 25 exec/s: 0 rss: 74Mb L: 13/17 MS: 1 EraseBytes- 00:06:50.877 [2024-11-26 19:33:26.004391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:50.877 [2024-11-26 19:33:26.004426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.877 [2024-11-26 19:33:26.004571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:50.877 [2024-11-26 19:33:26.004602] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.877 #17 NEW cov: 12482 ft: 13717 corp: 5/55b lim: 25 exec/s: 0 rss: 74Mb L: 13/17 MS: 1 ChangeByte- 00:06:50.877 [2024-11-26 19:33:26.074617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:50.877 [2024-11-26 19:33:26.074652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.877 [2024-11-26 19:33:26.074792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:50.877 [2024-11-26 19:33:26.074811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.877 #18 NEW cov: 12482 ft: 13807 corp: 6/67b lim: 25 exec/s: 0 rss: 74Mb L: 12/17 MS: 1 InsertByte- 00:06:50.877 [2024-11-26 19:33:26.144870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:50.877 [2024-11-26 19:33:26.144900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.877 [2024-11-26 19:33:26.144993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:50.877 [2024-11-26 19:33:26.145014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.877 #19 NEW cov: 12482 ft: 13968 corp: 7/78b lim: 25 exec/s: 0 rss: 74Mb L: 11/17 MS: 1 ChangeByte- 00:06:51.136 [2024-11-26 19:33:26.195139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.136 [2024-11-26 19:33:26.195168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.137 [2024-11-26 19:33:26.195263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.137 [2024-11-26 19:33:26.195287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.137 [2024-11-26 19:33:26.195411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:51.137 [2024-11-26 19:33:26.195432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.137 #20 NEW cov: 12482 ft: 14073 corp: 8/97b lim: 25 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 CMP- DE: "\377\003"- 00:06:51.137 [2024-11-26 19:33:26.245108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.137 [2024-11-26 19:33:26.245141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.137 [2024-11-26 19:33:26.245239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.137 [2024-11-26 19:33:26.245262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.137 #21 NEW cov: 12482 ft: 14141 corp: 9/110b lim: 25 exec/s: 0 rss: 74Mb L: 13/19 MS: 1 
ShuffleBytes- 00:06:51.137 [2024-11-26 19:33:26.315476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.137 [2024-11-26 19:33:26.315505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.137 [2024-11-26 19:33:26.315601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.137 [2024-11-26 19:33:26.315626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.137 [2024-11-26 19:33:26.315752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:51.137 [2024-11-26 19:33:26.315774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.137 #23 NEW cov: 12482 ft: 14155 corp: 10/125b lim: 25 exec/s: 0 rss: 74Mb L: 15/19 MS: 2 CopyPart-CrossOver- 00:06:51.137 [2024-11-26 19:33:26.365965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.137 [2024-11-26 19:33:26.365999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.137 [2024-11-26 19:33:26.366086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.137 [2024-11-26 19:33:26.366107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.137 [2024-11-26 19:33:26.366222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:51.137 [2024-11-26 19:33:26.366247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.137 [2024-11-26 19:33:26.366368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:51.137 [2024-11-26 19:33:26.366389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.137 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:51.137 #24 NEW cov: 12505 ft: 14632 corp: 11/147b lim: 25 exec/s: 0 rss: 75Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:06:51.137 [2024-11-26 19:33:26.426097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.137 [2024-11-26 19:33:26.426124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.137 [2024-11-26 19:33:26.426207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.137 [2024-11-26 19:33:26.426232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.137 [2024-11-26 19:33:26.426344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:51.137 [2024-11-26 19:33:26.426364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.137 [2024-11-26 19:33:26.426494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:51.137 [2024-11-26 19:33:26.426516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.396 #25 NEW cov: 12505 ft: 14693 corp: 12/170b lim: 25 exec/s: 0 rss: 75Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:06:51.396 [2024-11-26 19:33:26.495863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.396 [2024-11-26 19:33:26.495896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.396 [2024-11-26 19:33:26.496000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.396 [2024-11-26 19:33:26.496020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.396 #26 NEW cov: 12505 ft: 14763 corp: 13/181b lim: 25 exec/s: 26 rss: 75Mb L: 11/23 MS: 1 EraseBytes- 00:06:51.396 [2024-11-26 19:33:26.546377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.396 [2024-11-26 19:33:26.546406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.396 [2024-11-26 19:33:26.546492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.396 [2024-11-26 19:33:26.546511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.396 [2024-11-26 19:33:26.546629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:51.396 [2024-11-26 19:33:26.546653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.396 [2024-11-26 19:33:26.546776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:51.396 [2024-11-26 19:33:26.546799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.396 #27 NEW cov: 12505 ft: 14846 corp: 14/205b lim: 25 exec/s: 27 rss: 75Mb L: 24/24 MS: 1 InsertByte- 00:06:51.396 [2024-11-26 19:33:26.616604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.396 [2024-11-26 19:33:26.616633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.396 [2024-11-26 19:33:26.616728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.396 [2024-11-26 19:33:26.616744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.396 [2024-11-26 19:33:26.616863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:51.396 [2024-11-26 19:33:26.616886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:06:51.396 [2024-11-26 19:33:26.616998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:51.396 [2024-11-26 19:33:26.617020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.396 #28 NEW cov: 12505 ft: 14889 corp: 15/229b lim: 25 exec/s: 28 rss: 75Mb L: 24/24 MS: 1 PersAutoDict- DE: "\377\003"- 00:06:51.396 [2024-11-26 19:33:26.686376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.396 [2024-11-26 19:33:26.686411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.397 [2024-11-26 19:33:26.686509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.397 [2024-11-26 19:33:26.686531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.655 #29 NEW cov: 12505 ft: 14900 corp: 16/241b lim: 25 exec/s: 29 rss: 75Mb L: 12/24 MS: 1 ShuffleBytes- 00:06:51.655 [2024-11-26 19:33:26.736950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.655 [2024-11-26 19:33:26.736978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.655 [2024-11-26 19:33:26.737065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.655 [2024-11-26 19:33:26.737086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.655 [2024-11-26 19:33:26.737207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:51.655 [2024-11-26 19:33:26.737230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.655 [2024-11-26 19:33:26.737349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:51.655 [2024-11-26 19:33:26.737370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.655 #30 NEW cov: 12505 ft: 14925 corp: 17/265b lim: 25 exec/s: 30 rss: 75Mb L: 24/24 MS: 1 InsertByte- 00:06:51.655 [2024-11-26 19:33:26.786683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.655 [2024-11-26 19:33:26.786716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.655 [2024-11-26 19:33:26.786829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.655 [2024-11-26 19:33:26.786852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.655 #31 NEW cov: 12505 ft: 14939 corp: 18/279b lim: 25 exec/s: 31 rss: 75Mb L: 14/24 MS: 1 InsertRepeatedBytes- 00:06:51.656 [2024-11-26 19:33:26.836956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.656 [2024-11-26 
19:33:26.836989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.656 [2024-11-26 19:33:26.837094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.656 [2024-11-26 19:33:26.837114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.656 #32 NEW cov: 12505 ft: 14956 corp: 19/289b lim: 25 exec/s: 32 rss: 75Mb L: 10/24 MS: 1 InsertRepeatedBytes- 00:06:51.656 [2024-11-26 19:33:26.887125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.656 [2024-11-26 19:33:26.887156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.656 [2024-11-26 19:33:26.887268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.656 [2024-11-26 19:33:26.887291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.656 #33 NEW cov: 12505 ft: 14976 corp: 20/301b lim: 25 exec/s: 33 rss: 75Mb L: 12/24 MS: 1 ShuffleBytes- 00:06:51.656 [2024-11-26 19:33:26.937645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.656 [2024-11-26 19:33:26.937676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.656 [2024-11-26 19:33:26.937753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.656 [2024-11-26 19:33:26.937774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.656 [2024-11-26 19:33:26.937901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:51.656 [2024-11-26 19:33:26.937923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.915 #39 NEW cov: 12505 ft: 15015 corp: 21/316b lim: 25 exec/s: 39 rss: 75Mb L: 15/24 MS: 1 ChangeBit- 00:06:51.915 [2024-11-26 19:33:27.008100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.915 [2024-11-26 19:33:27.008130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.915 [2024-11-26 19:33:27.008251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.915 [2024-11-26 19:33:27.008276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.915 [2024-11-26 19:33:27.008394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:51.915 [2024-11-26 19:33:27.008419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.915 [2024-11-26 19:33:27.008534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:51.915 [2024-11-26 
19:33:27.008556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.915 [2024-11-26 19:33:27.008678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:06:51.915 [2024-11-26 19:33:27.008700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:51.915 #40 NEW cov: 12505 ft: 15068 corp: 22/341b lim: 25 exec/s: 40 rss: 75Mb L: 25/25 MS: 1 InsertByte- 00:06:51.915 [2024-11-26 19:33:27.078148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.915 [2024-11-26 19:33:27.078182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.915 [2024-11-26 19:33:27.078269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.915 [2024-11-26 19:33:27.078290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.915 [2024-11-26 19:33:27.078408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:51.915 [2024-11-26 19:33:27.078430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.915 [2024-11-26 19:33:27.078551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:51.915 [2024-11-26 19:33:27.078575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.915 #41 NEW cov: 12505 ft: 15082 corp: 23/365b lim: 25 exec/s: 41 rss: 75Mb L: 24/25 MS: 1 PersAutoDict- DE: "\377\003"- 00:06:51.915 [2024-11-26 19:33:27.148347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.915 [2024-11-26 19:33:27.148380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.915 [2024-11-26 19:33:27.148459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.915 [2024-11-26 19:33:27.148483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.915 [2024-11-26 19:33:27.148604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:51.915 [2024-11-26 19:33:27.148632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.915 [2024-11-26 19:33:27.148750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:51.915 [2024-11-26 19:33:27.148774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.915 #42 NEW cov: 12505 ft: 15087 corp: 24/385b lim: 25 exec/s: 42 rss: 75Mb L: 20/25 MS: 1 EraseBytes- 00:06:51.915 [2024-11-26 19:33:27.198384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:51.915 [2024-11-26 
19:33:27.198419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.915 [2024-11-26 19:33:27.198508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:51.915 [2024-11-26 19:33:27.198530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.915 [2024-11-26 19:33:27.198650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:51.915 [2024-11-26 19:33:27.198679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.915 [2024-11-26 19:33:27.198804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:51.915 [2024-11-26 19:33:27.198828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.174 #43 NEW cov: 12505 ft: 15100 corp: 25/408b lim: 25 exec/s: 43 rss: 75Mb L: 23/25 MS: 1 InsertRepeatedBytes- 00:06:52.174 [2024-11-26 19:33:27.268508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:52.174 [2024-11-26 19:33:27.268541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.174 [2024-11-26 19:33:27.268656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:52.174 [2024-11-26 19:33:27.268677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.174 [2024-11-26 19:33:27.268799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:52.174 [2024-11-26 19:33:27.268822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.174 #44 NEW cov: 12505 ft: 15128 corp: 26/423b lim: 25 exec/s: 44 rss: 76Mb L: 15/25 MS: 1 CrossOver- 00:06:52.174 [2024-11-26 19:33:27.338912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:52.174 [2024-11-26 19:33:27.338941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.174 [2024-11-26 19:33:27.339032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:52.174 [2024-11-26 19:33:27.339055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.174 [2024-11-26 19:33:27.339174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:52.174 [2024-11-26 19:33:27.339199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.174 [2024-11-26 19:33:27.339325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:52.174 [2024-11-26 19:33:27.339350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 
cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.174 #45 NEW cov: 12505 ft: 15135 corp: 27/446b lim: 25 exec/s: 45 rss: 76Mb L: 23/25 MS: 1 ChangeByte- 00:06:52.174 [2024-11-26 19:33:27.388654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:52.174 [2024-11-26 19:33:27.388688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.174 [2024-11-26 19:33:27.388805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:52.174 [2024-11-26 19:33:27.388828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.174 #46 NEW cov: 12505 ft: 15164 corp: 28/460b lim: 25 exec/s: 46 rss: 76Mb L: 14/25 MS: 1 CMP- DE: "\005\000\000\000\000\000\000\000"- 00:06:52.174 [2024-11-26 19:33:27.439188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:52.174 [2024-11-26 19:33:27.439218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.174 [2024-11-26 19:33:27.439305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:52.174 [2024-11-26 19:33:27.439332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.174 [2024-11-26 19:33:27.439458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:52.174 [2024-11-26 19:33:27.439480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.174 [2024-11-26 19:33:27.439604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:52.174 [2024-11-26 19:33:27.439625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.433 #47 NEW cov: 12505 ft: 15167 corp: 29/482b lim: 25 exec/s: 47 rss: 76Mb L: 22/25 MS: 1 CrossOver- 00:06:52.433 [2024-11-26 19:33:27.508959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:52.433 [2024-11-26 19:33:27.508991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.433 [2024-11-26 19:33:27.509094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:52.433 [2024-11-26 19:33:27.509116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.433 #48 NEW cov: 12505 ft: 15171 corp: 30/495b lim: 25 exec/s: 24 rss: 76Mb L: 13/25 MS: 1 ChangeBit- 00:06:52.433 #48 DONE cov: 12505 ft: 15171 corp: 30/495b lim: 25 exec/s: 24 rss: 76Mb 00:06:52.433 ###### Recommended dictionary. ###### 00:06:52.433 "\377\003" # Uses: 2 00:06:52.433 "\005\000\000\000\000\000\000\000" # Uses: 0 00:06:52.433 ###### End of recommended dictionary. 
###### 00:06:52.433 Done 48 runs in 2 second(s) 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:52.433 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:52.434 19:33:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:06:52.434 [2024-11-26 19:33:27.682722] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:06:52.434 [2024-11-26 19:33:27.682784] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130047 ] 00:06:52.692 [2024-11-26 19:33:27.937670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.692 [2024-11-26 19:33:27.995139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.949 [2024-11-26 19:33:28.054075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.949 [2024-11-26 19:33:28.070425] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:06:52.949 INFO: Running with entropic power schedule (0xFF, 100). 00:06:52.949 INFO: Seed: 1233506750 00:06:52.949 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:52.949 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:52.949 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:06:52.949 INFO: A corpus is not provided, starting from an empty corpus 00:06:52.949 #2 INITED exec/s: 0 rss: 65Mb 00:06:52.949 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:52.949 This may also happen if the target rejected all inputs we tried so far 00:06:52.949 [2024-11-26 19:33:28.126207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.949 [2024-11-26 19:33:28.126237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.949 [2024-11-26 19:33:28.126275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.949 [2024-11-26 19:33:28.126292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.949 [2024-11-26 19:33:28.126347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.949 [2024-11-26 19:33:28.126363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.949 [2024-11-26 19:33:28.126419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.949 [2024-11-26 19:33:28.126434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.209 NEW_FUNC[1/718]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:06:53.209 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:53.209 #7 NEW cov: 12350 ft: 12346 corp: 2/100b lim: 100 exec/s: 0 rss: 73Mb L: 99/99 MS: 5 CrossOver-InsertByte-ChangeBit-CopyPart-InsertRepeatedBytes- 00:06:53.209 [2024-11-26 19:33:28.466823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.209 [2024-11-26 19:33:28.466859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.209 [2024-11-26 19:33:28.466899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.209 [2024-11-26 19:33:28.466914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.209 [2024-11-26 19:33:28.466968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.209 [2024-11-26 19:33:28.466984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.209 #13 NEW cov: 12463 ft: 13306 corp: 3/169b lim: 100 exec/s: 0 rss: 73Mb L: 69/99 MS: 1 InsertRepeatedBytes- 00:06:53.209 [2024-11-26 19:33:28.506846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:47382 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.209 [2024-11-26 19:33:28.506876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.209 [2024-11-26 19:33:28.506915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.209 [2024-11-26 19:33:28.506930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.209 [2024-11-26 19:33:28.506984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.209 [2024-11-26 19:33:28.507000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.469 #19 NEW cov: 12469 ft: 13496 corp: 4/238b lim: 100 exec/s: 0 rss: 73Mb L: 69/99 MS: 1 ChangeByte- 00:06:53.469 [2024-11-26 19:33:28.566883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:163363543842816 len:38037 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.469 [2024-11-26 19:33:28.566914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.469 [2024-11-26 19:33:28.566954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10706345580035347604 len:38037 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.469 [2024-11-26 19:33:28.566970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.469 #21 NEW cov: 12554 ft: 14177 corp: 5/297b lim: 100 exec/s: 0 rss: 73Mb L: 59/99 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:06:53.469 [2024-11-26 19:33:28.607093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:47382 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.469 [2024-11-26 19:33:28.607121] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.469 [2024-11-26 19:33:28.607167] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.469 [2024-11-26 19:33:28.607183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.469 [2024-11-26 19:33:28.607237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.469 [2024-11-26 19:33:28.607254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.469 #22 NEW cov: 12554 ft: 14259 corp: 6/366b lim: 100 exec/s: 0 rss: 73Mb L: 69/99 MS: 1 ShuffleBytes- 00:06:53.469 [2024-11-26 19:33:28.666981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5136152272472524615 len:18248 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.469 [2024-11-26 19:33:28.667009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.469 #24 NEW cov: 12554 ft: 15122 corp: 7/390b lim: 100 exec/s: 0 rss: 73Mb L: 24/99 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:53.469 [2024-11-26 19:33:28.707117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.469 [2024-11-26 19:33:28.707146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.469 #26 NEW cov: 12554 ft: 15244 corp: 8/429b lim: 100 exec/s: 0 rss: 73Mb L: 39/99 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:53.469 [2024-11-26 19:33:28.747209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.469 [2024-11-26 19:33:28.747238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.728 #27 NEW cov: 12554 ft: 15319 corp: 9/468b lim: 100 exec/s: 0 rss: 73Mb L: 39/99 MS: 1 ShuffleBytes- 00:06:53.728 [2024-11-26 19:33:28.807707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.728 [2024-11-26 19:33:28.807737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.728 [2024-11-26 19:33:28.807785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.728 [2024-11-26 19:33:28.807802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.728 [2024-11-26 19:33:28.807856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.728 [2024-11-26 19:33:28.807873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:06:53.728 #28 NEW cov: 12554 ft: 15368 corp: 10/545b lim: 100 exec/s: 0 rss: 73Mb L: 77/99 MS: 1 EraseBytes- 00:06:53.728 [2024-11-26 19:33:28.867872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:47382 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.728 [2024-11-26 19:33:28.867902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.728 [2024-11-26 19:33:28.867938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.728 [2024-11-26 19:33:28.867954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.728 [2024-11-26 19:33:28.868008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.728 [2024-11-26 19:33:28.868025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.728 #34 NEW cov: 12554 ft: 15407 corp: 11/614b lim: 100 exec/s: 0 rss: 73Mb L: 69/99 MS: 1 CrossOver- 00:06:53.728 [2024-11-26 19:33:28.907953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:47382 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.728 [2024-11-26 19:33:28.907982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.728 [2024-11-26 19:33:28.908020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.728 [2024-11-26 19:33:28.908036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.728 [2024-11-26 19:33:28.908090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.728 [2024-11-26 19:33:28.908107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.729 #35 NEW cov: 12554 ft: 15469 corp: 12/683b lim: 100 exec/s: 0 rss: 73Mb L: 69/99 MS: 1 ChangeBinInt- 00:06:53.729 [2024-11-26 19:33:28.947916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.729 [2024-11-26 19:33:28.947943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.729 [2024-11-26 19:33:28.947982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.729 [2024-11-26 19:33:28.948001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.729 #36 NEW cov: 12554 ft: 15494 corp: 13/730b lim: 100 exec/s: 0 rss: 73Mb L: 47/99 MS: 1 EraseBytes- 00:06:53.729 [2024-11-26 19:33:28.988289] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.729 [2024-11-26 19:33:28.988318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.729 [2024-11-26 19:33:28.988366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.729 [2024-11-26 19:33:28.988382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.729 [2024-11-26 19:33:28.988435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446743614148050943 len:38037 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.729 [2024-11-26 19:33:28.988451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.729 [2024-11-26 19:33:28.988504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.729 [2024-11-26 19:33:28.988520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.729 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:53.729 #37 NEW cov: 12577 ft: 15608 corp: 14/815b lim: 100 exec/s: 0 rss: 73Mb L: 85/99 MS: 1 InsertRepeatedBytes- 00:06:53.988 [2024-11-26 19:33:29.048336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:4374 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.048363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.988 [2024-11-26 19:33:29.048400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.048417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.988 [2024-11-26 19:33:29.048468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.048483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.988 #38 NEW cov: 12577 ft: 15643 corp: 15/884b lim: 100 exec/s: 0 rss: 73Mb L: 69/99 MS: 1 ChangeBit- 00:06:53.988 [2024-11-26 19:33:29.088431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599606037 len:4374 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.088458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.988 [2024-11-26 19:33:29.088505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.088522] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.988 [2024-11-26 19:33:29.088575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.088591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.988 #44 NEW cov: 12577 ft: 15664 corp: 16/953b lim: 100 exec/s: 44 rss: 74Mb L: 69/99 MS: 1 ChangeBit- 00:06:53.988 [2024-11-26 19:33:29.148587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.148620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.988 [2024-11-26 19:33:29.148669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.148685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.988 [2024-11-26 19:33:29.148739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.148756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.988 #45 NEW cov: 12577 ft: 15725 corp: 17/1030b lim: 100 exec/s: 45 rss: 74Mb L: 77/99 MS: 1 ShuffleBytes- 00:06:53.988 [2024-11-26 19:33:29.188727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:47382 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.188754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.988 [2024-11-26 19:33:29.188800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599665386 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.188816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.988 [2024-11-26 19:33:29.188869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.188885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.988 #46 NEW cov: 12577 ft: 15802 corp: 18/1099b lim: 100 exec/s: 46 rss: 74Mb L: 69/99 MS: 1 ChangeBinInt- 00:06:53.988 [2024-11-26 19:33:29.248933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:47382 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.248960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.988 [2024-11-26 19:33:29.248996] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599665386 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.249012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.988 [2024-11-26 19:33:29.249065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.988 [2024-11-26 19:33:29.249081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.988 #47 NEW cov: 12577 ft: 15828 corp: 19/1168b lim: 100 exec/s: 47 rss: 74Mb L: 69/99 MS: 1 CopyPart- 00:06:54.247 [2024-11-26 19:33:29.308730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.308758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.247 #48 NEW cov: 12577 ft: 15854 corp: 20/1207b lim: 100 exec/s: 48 rss: 74Mb L: 39/99 MS: 1 ChangeByte- 00:06:54.247 [2024-11-26 19:33:29.349336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.349368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.247 [2024-11-26 19:33:29.349406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5562 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.349422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.247 [2024-11-26 19:33:29.349476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.349492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.247 [2024-11-26 19:33:29.349544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.349559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.247 #49 NEW cov: 12577 ft: 15868 corp: 21/1297b lim: 100 exec/s: 49 rss: 74Mb L: 90/99 MS: 1 CrossOver- 00:06:54.247 [2024-11-26 19:33:29.409339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.409366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.247 [2024-11-26 19:33:29.409403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.409419] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.247 [2024-11-26 19:33:29.409476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.409492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.247 #50 NEW cov: 12577 ft: 15879 corp: 22/1374b lim: 100 exec/s: 50 rss: 74Mb L: 77/99 MS: 1 ChangeBit- 00:06:54.247 [2024-11-26 19:33:29.449605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.449650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.247 [2024-11-26 19:33:29.449701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.449718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.247 [2024-11-26 19:33:29.449771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.449787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.247 [2024-11-26 19:33:29.449845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.449859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.247 #51 NEW cov: 12577 ft: 15907 corp: 23/1472b lim: 100 exec/s: 51 rss: 74Mb L: 98/99 MS: 1 InsertRepeatedBytes- 00:06:54.247 [2024-11-26 19:33:29.509609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.509640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.247 [2024-11-26 19:33:29.509677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.509692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.247 [2024-11-26 19:33:29.509745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073692774399 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.247 [2024-11-26 19:33:29.509761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.247 #52 NEW cov: 12577 ft: 15914 corp: 24/1549b lim: 100 exec/s: 52 rss: 74Mb L: 77/99 MS: 1 ChangeBit- 00:06:54.506 [2024-11-26 19:33:29.569805] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1534061803365270805 len:5394 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.569832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.507 [2024-11-26 19:33:29.569879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.569895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.507 [2024-11-26 19:33:29.569952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.569969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.507 #53 NEW cov: 12577 ft: 15934 corp: 25/1619b lim: 100 exec/s: 53 rss: 74Mb L: 70/99 MS: 1 InsertByte- 00:06:54.507 [2024-11-26 19:33:29.630093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.630121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.507 [2024-11-26 19:33:29.630175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629598954773 len:5562 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.630190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.507 [2024-11-26 19:33:29.630244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.630260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.507 [2024-11-26 19:33:29.630312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.630328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.507 #54 NEW cov: 12577 ft: 15958 corp: 26/1709b lim: 100 exec/s: 54 rss: 74Mb L: 90/99 MS: 1 ChangeBinInt- 00:06:54.507 [2024-11-26 19:33:29.690119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:4374 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.690146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.507 [2024-11-26 19:33:29.690187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.690203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.507 
[2024-11-26 19:33:29.690254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.690270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.507 #55 NEW cov: 12577 ft: 15977 corp: 27/1778b lim: 100 exec/s: 55 rss: 74Mb L: 69/99 MS: 1 ShuffleBytes- 00:06:54.507 [2024-11-26 19:33:29.730031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:47382 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.730059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.507 [2024-11-26 19:33:29.730098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.730113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.507 #56 NEW cov: 12577 ft: 15999 corp: 28/1829b lim: 100 exec/s: 56 rss: 74Mb L: 51/99 MS: 1 EraseBytes- 00:06:54.507 [2024-11-26 19:33:29.770489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.770516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.507 [2024-11-26 19:33:29.770564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709543423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.770580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.507 [2024-11-26 19:33:29.770636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446743614148050943 len:38037 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.770652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.507 [2024-11-26 19:33:29.770715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.507 [2024-11-26 19:33:29.770730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.507 #57 NEW cov: 12577 ft: 16049 corp: 29/1914b lim: 100 exec/s: 57 rss: 74Mb L: 85/99 MS: 1 ChangeBit- 00:06:54.766 [2024-11-26 19:33:29.830612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:29.830641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.767 [2024-11-26 19:33:29.830693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 
[2024-11-26 19:33:29.830709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.767 [2024-11-26 19:33:29.830760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:11429747308416114334 len:40607 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:29.830793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.767 [2024-11-26 19:33:29.830853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:29.830869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.767 #58 NEW cov: 12577 ft: 16057 corp: 30/2009b lim: 100 exec/s: 58 rss: 75Mb L: 95/99 MS: 1 InsertRepeatedBytes- 00:06:54.767 [2024-11-26 19:33:29.890620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:29.890662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.767 [2024-11-26 19:33:29.890715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:29.890739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.767 [2024-11-26 19:33:29.890806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:29.890821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.767 #59 NEW cov: 12577 ft: 16068 corp: 31/2086b lim: 100 exec/s: 59 rss: 75Mb L: 77/99 MS: 1 ShuffleBytes- 00:06:54.767 [2024-11-26 19:33:29.930894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:29.930922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.767 [2024-11-26 19:33:29.930975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4294966787 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:29.930991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.767 [2024-11-26 19:33:29.931043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:29.931060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.767 [2024-11-26 19:33:29.931114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 
len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:29.931129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.767 #60 NEW cov: 12577 ft: 16159 corp: 32/2171b lim: 100 exec/s: 60 rss: 75Mb L: 85/99 MS: 1 CMP- DE: "\376\003\000\000\000\000\000\000"- 00:06:54.767 [2024-11-26 19:33:29.970945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599606037 len:4374 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:29.970972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.767 [2024-11-26 19:33:29.971019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:29.971035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.767 [2024-11-26 19:33:29.971089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:29.971108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.767 #61 NEW cov: 12577 ft: 16161 corp: 33/2240b lim: 100 exec/s: 61 rss: 75Mb L: 69/99 MS: 1 ChangeByte- 00:06:54.767 [2024-11-26 19:33:30.011095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:30.011122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.767 [2024-11-26 19:33:30.011171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1519143629599665386 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:30.011187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.767 [2024-11-26 19:33:30.011242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:30.011259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.767 #62 NEW cov: 12577 ft: 16222 corp: 34/2309b lim: 100 exec/s: 62 rss: 75Mb L: 69/99 MS: 1 CopyPart- 00:06:54.767 [2024-11-26 19:33:30.071245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:30.071278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.767 [2024-11-26 19:33:30.071315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:30.071332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.767 [2024-11-26 19:33:30.071384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65378 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.767 [2024-11-26 19:33:30.071402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.027 #63 NEW cov: 12577 ft: 16255 corp: 35/2387b lim: 100 exec/s: 31 rss: 75Mb L: 78/99 MS: 1 InsertByte- 00:06:55.027 #63 DONE cov: 12577 ft: 16255 corp: 35/2387b lim: 100 exec/s: 31 rss: 75Mb 00:06:55.027 ###### Recommended dictionary. ###### 00:06:55.027 "\376\003\000\000\000\000\000\000" # Uses: 0 00:06:55.027 ###### End of recommended dictionary. ###### 00:06:55.027 Done 63 runs in 2 second(s) 00:06:55.027 19:33:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:06:55.027 19:33:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:55.027 19:33:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:55.027 19:33:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:06:55.027 00:06:55.027 real 1m5.625s 00:06:55.027 user 1m40.328s 00:06:55.027 sys 0m8.911s 00:06:55.027 19:33:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.027 19:33:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:55.027 ************************************ 00:06:55.027 END TEST nvmf_llvm_fuzz 00:06:55.027 ************************************ 00:06:55.027 19:33:30 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:55.027 19:33:30 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:55.027 19:33:30 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:06:55.027 19:33:30 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.027 19:33:30 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.027 19:33:30 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:55.027 ************************************ 00:06:55.027 START TEST vfio_llvm_fuzz 00:06:55.027 ************************************ 00:06:55.027 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:06:55.288 * Looking for test storage... 
00:06:55.288 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.288 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.289 --rc genhtml_branch_coverage=1 00:06:55.289 --rc genhtml_function_coverage=1 00:06:55.289 --rc genhtml_legend=1 00:06:55.289 --rc geninfo_all_blocks=1 00:06:55.289 --rc geninfo_unexecuted_blocks=1 00:06:55.289 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.289 ' 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.289 --rc genhtml_branch_coverage=1 00:06:55.289 --rc genhtml_function_coverage=1 00:06:55.289 --rc genhtml_legend=1 00:06:55.289 --rc geninfo_all_blocks=1 00:06:55.289 --rc geninfo_unexecuted_blocks=1 00:06:55.289 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.289 ' 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.289 --rc genhtml_branch_coverage=1 00:06:55.289 --rc genhtml_function_coverage=1 00:06:55.289 --rc genhtml_legend=1 00:06:55.289 --rc geninfo_all_blocks=1 00:06:55.289 --rc geninfo_unexecuted_blocks=1 00:06:55.289 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.289 ' 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.289 --rc genhtml_branch_coverage=1 00:06:55.289 --rc genhtml_function_coverage=1 00:06:55.289 --rc genhtml_legend=1 00:06:55.289 --rc geninfo_all_blocks=1 00:06:55.289 --rc geninfo_unexecuted_blocks=1 00:06:55.289 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.289 ' 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:55.289 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:55.290 #define SPDK_CONFIG_H 00:06:55.290 #define SPDK_CONFIG_AIO_FSDEV 1 00:06:55.290 #define SPDK_CONFIG_APPS 1 00:06:55.290 #define SPDK_CONFIG_ARCH native 00:06:55.290 #undef SPDK_CONFIG_ASAN 00:06:55.290 #undef SPDK_CONFIG_AVAHI 00:06:55.290 #undef SPDK_CONFIG_CET 00:06:55.290 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:06:55.290 #define SPDK_CONFIG_COVERAGE 1 00:06:55.290 #define SPDK_CONFIG_CROSS_PREFIX 00:06:55.290 #undef SPDK_CONFIG_CRYPTO 00:06:55.290 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:55.290 #undef SPDK_CONFIG_CUSTOMOCF 00:06:55.290 #undef SPDK_CONFIG_DAOS 00:06:55.290 #define SPDK_CONFIG_DAOS_DIR 00:06:55.290 #define SPDK_CONFIG_DEBUG 1 00:06:55.290 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:55.290 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:55.290 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:55.290 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:55.290 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:55.290 #undef SPDK_CONFIG_DPDK_UADK 00:06:55.290 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:55.290 #define SPDK_CONFIG_EXAMPLES 1 00:06:55.290 #undef SPDK_CONFIG_FC 00:06:55.290 #define SPDK_CONFIG_FC_PATH 00:06:55.290 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:55.290 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:55.290 #define SPDK_CONFIG_FSDEV 1 00:06:55.290 #undef SPDK_CONFIG_FUSE 00:06:55.290 #define SPDK_CONFIG_FUZZER 1 00:06:55.290 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:55.290 #undef 
SPDK_CONFIG_GOLANG 00:06:55.290 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:55.290 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:55.290 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:55.290 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:55.290 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:55.290 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:55.290 #undef SPDK_CONFIG_HAVE_LZ4 00:06:55.290 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:06:55.290 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:06:55.290 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:55.290 #define SPDK_CONFIG_IDXD 1 00:06:55.290 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:55.290 #undef SPDK_CONFIG_IPSEC_MB 00:06:55.290 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:55.290 #define SPDK_CONFIG_ISAL 1 00:06:55.290 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:55.290 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:55.290 #define SPDK_CONFIG_LIBDIR 00:06:55.290 #undef SPDK_CONFIG_LTO 00:06:55.290 #define SPDK_CONFIG_MAX_LCORES 128 00:06:55.290 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:06:55.290 #define SPDK_CONFIG_NVME_CUSE 1 00:06:55.290 #undef SPDK_CONFIG_OCF 00:06:55.290 #define SPDK_CONFIG_OCF_PATH 00:06:55.290 #define SPDK_CONFIG_OPENSSL_PATH 00:06:55.290 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:55.290 #define SPDK_CONFIG_PGO_DIR 00:06:55.290 #undef SPDK_CONFIG_PGO_USE 00:06:55.290 #define SPDK_CONFIG_PREFIX /usr/local 00:06:55.290 #undef SPDK_CONFIG_RAID5F 00:06:55.290 #undef SPDK_CONFIG_RBD 00:06:55.290 #define SPDK_CONFIG_RDMA 1 00:06:55.290 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:55.290 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:55.290 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:55.290 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:55.290 #undef SPDK_CONFIG_SHARED 00:06:55.290 #undef SPDK_CONFIG_SMA 00:06:55.290 #define SPDK_CONFIG_TESTS 1 00:06:55.290 #undef SPDK_CONFIG_TSAN 00:06:55.290 #define SPDK_CONFIG_UBLK 1 00:06:55.290 #define SPDK_CONFIG_UBSAN 1 00:06:55.290 #undef SPDK_CONFIG_UNIT_TESTS 00:06:55.290 #undef SPDK_CONFIG_URING 00:06:55.290 #define SPDK_CONFIG_URING_PATH 00:06:55.290 #undef SPDK_CONFIG_URING_ZNS 00:06:55.290 #undef SPDK_CONFIG_USDT 00:06:55.290 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:55.290 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:55.290 #define SPDK_CONFIG_VFIO_USER 1 00:06:55.290 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:55.290 #define SPDK_CONFIG_VHOST 1 00:06:55.290 #define SPDK_CONFIG_VIRTIO 1 00:06:55.290 #undef SPDK_CONFIG_VTUNE 00:06:55.290 #define SPDK_CONFIG_VTUNE_DIR 00:06:55.290 #define SPDK_CONFIG_WERROR 1 00:06:55.290 #define SPDK_CONFIG_WPDK_DIR 00:06:55.290 #undef SPDK_CONFIG_XNVME 00:06:55.290 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:55.290 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:06:55.291 19:33:30 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:06:55.291 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:55.292 19:33:30 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:06:55.292 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 1130510 ]] 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 1130510 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.5cR17k 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.5cR17k/tests/vfio /tmp/spdk.5cR17k 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=52909035520 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730607104 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=8821571584 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:55.552 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30860537856 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865301504 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=12340129792 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346122240 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5992448 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30863781888 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865305600 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=1523712 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:06:55.553 * Looking for test storage... 
00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=52909035520 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=11036164096 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:55.553 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set -o errtrace 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1685 -- # true 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # xtrace_fd 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.553 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.554 --rc genhtml_branch_coverage=1 00:06:55.554 --rc genhtml_function_coverage=1 00:06:55.554 --rc genhtml_legend=1 00:06:55.554 --rc geninfo_all_blocks=1 00:06:55.554 --rc geninfo_unexecuted_blocks=1 00:06:55.554 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.554 ' 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.554 --rc genhtml_branch_coverage=1 00:06:55.554 --rc genhtml_function_coverage=1 00:06:55.554 --rc genhtml_legend=1 00:06:55.554 --rc geninfo_all_blocks=1 00:06:55.554 --rc geninfo_unexecuted_blocks=1 00:06:55.554 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.554 ' 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.554 --rc genhtml_branch_coverage=1 00:06:55.554 --rc genhtml_function_coverage=1 00:06:55.554 --rc genhtml_legend=1 00:06:55.554 --rc geninfo_all_blocks=1 00:06:55.554 --rc geninfo_unexecuted_blocks=1 00:06:55.554 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.554 ' 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.554 --rc genhtml_branch_coverage=1 00:06:55.554 --rc genhtml_function_coverage=1 00:06:55.554 --rc genhtml_legend=1 00:06:55.554 --rc geninfo_all_blocks=1 00:06:55.554 --rc geninfo_unexecuted_blocks=1 00:06:55.554 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.554 ' 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:06:55.554 19:33:30 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:06:55.554 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:06:55.554 19:33:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:06:55.554 [2024-11-26 19:33:30.800267] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:06:55.554 [2024-11-26 19:33:30.800341] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130672 ] 00:06:55.813 [2024-11-26 19:33:30.881945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.813 [2024-11-26 19:33:30.924565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.813 INFO: Running with entropic power schedule (0xFF, 100). 00:06:55.813 INFO: Seed: 4258487490 00:06:56.072 INFO: Loaded 1 modules (386754 inline 8-bit counters): 386754 [0x2c2b80c, 0x2c89ece), 00:06:56.072 INFO: Loaded 1 PC tables (386754 PCs): 386754 [0x2c89ed0,0x3270af0), 00:06:56.072 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:06:56.072 INFO: A corpus is not provided, starting from an empty corpus 00:06:56.072 #2 INITED exec/s: 0 rss: 67Mb 00:06:56.072 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:56.072 This may also happen if the target rejected all inputs we tried so far 00:06:56.072 [2024-11-26 19:33:31.165407] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:06:56.331 NEW_FUNC[1/675]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:06:56.331 NEW_FUNC[2/675]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:06:56.331 #11 NEW cov: 11196 ft: 11161 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 4 InsertRepeatedBytes-ChangeBinInt-CrossOver-CopyPart- 00:06:56.590 NEW_FUNC[1/1]: 0x198e898 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1190 00:06:56.590 #12 NEW cov: 11213 ft: 14476 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 CMP- DE: "\365\377\377\377"- 00:06:56.849 NEW_FUNC[1/1]: 0x1c12bc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:56.849 #27 NEW cov: 11230 ft: 15862 corp: 4/19b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 5 CrossOver-PersAutoDict-ChangeBit-CopyPart-CopyPart- DE: "\365\377\377\377"- 00:06:56.849 #28 NEW cov: 11230 ft: 16146 corp: 5/25b lim: 6 exec/s: 28 rss: 76Mb L: 6/6 MS: 1 CrossOver- 00:06:57.108 #29 NEW cov: 11230 ft: 16859 corp: 6/31b lim: 6 exec/s: 29 rss: 76Mb L: 6/6 MS: 1 ShuffleBytes- 00:06:57.367 #30 NEW cov: 11230 ft: 17901 corp: 7/37b lim: 6 exec/s: 30 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:06:57.627 #36 NEW cov: 11230 ft: 18145 corp: 8/43b lim: 6 exec/s: 36 rss: 76Mb L: 6/6 MS: 1 ChangeBinInt- 00:06:57.627 #37 NEW cov: 11230 ft: 18421 corp: 9/49b lim: 6 exec/s: 37 rss: 76Mb L: 6/6 MS: 1 ChangeBinInt- 00:06:57.886 #41 NEW cov: 11237 ft: 18624 corp: 10/55b lim: 6 exec/s: 41 rss: 77Mb L: 6/6 MS: 4 EraseBytes-CopyPart-PersAutoDict-CrossOver- DE: "\365\377\377\377"- 00:06:58.145 #52 NEW cov: 11237 ft: 18906 corp: 11/61b lim: 
6 exec/s: 26 rss: 77Mb L: 6/6 MS: 1 CopyPart- 00:06:58.145 #52 DONE cov: 11237 ft: 18906 corp: 11/61b lim: 6 exec/s: 26 rss: 77Mb 00:06:58.145 ###### Recommended dictionary. ###### 00:06:58.145 "\365\377\377\377" # Uses: 3 00:06:58.145 ###### End of recommended dictionary. ###### 00:06:58.145 Done 52 runs in 2 second(s) 00:06:58.145 [2024-11-26 19:33:33.259817] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:06:58.405 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:06:58.405 19:33:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:06:58.405 [2024-11-26 19:33:33.527529] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
00:06:58.405 [2024-11-26 19:33:33.527617] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131114 ] 00:06:58.405 [2024-11-26 19:33:33.609210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.405 [2024-11-26 19:33:33.649697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.665 INFO: Running with entropic power schedule (0xFF, 100). 00:06:58.665 INFO: Seed: 2694528083 00:06:58.665 INFO: Loaded 1 modules (386754 inline 8-bit counters): 386754 [0x2c2b80c, 0x2c89ece), 00:06:58.665 INFO: Loaded 1 PC tables (386754 PCs): 386754 [0x2c89ed0,0x3270af0), 00:06:58.665 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:06:58.665 INFO: A corpus is not provided, starting from an empty corpus 00:06:58.665 #2 INITED exec/s: 0 rss: 67Mb 00:06:58.665 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:58.665 This may also happen if the target rejected all inputs we tried so far 00:06:58.665 [2024-11-26 19:33:33.903605] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:06:58.665 [2024-11-26 19:33:33.939980] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:58.665 [2024-11-26 19:33:33.940008] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:06:58.665 [2024-11-26 19:33:33.940027] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:59.183 NEW_FUNC[1/678]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:06:59.183 NEW_FUNC[2/678]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:06:59.183 #7 NEW cov: 11188 ft: 11163 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 5 CrossOver-CopyPart-CrossOver-CopyPart-CopyPart- 00:06:59.183 [2024-11-26 19:33:34.402934] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:59.183 [2024-11-26 19:33:34.402969] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:06:59.183 [2024-11-26 19:33:34.402987] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:59.442 #8 NEW cov: 11205 ft: 14271 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ChangeByte- 00:06:59.443 [2024-11-26 19:33:34.582312] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:59.443 [2024-11-26 19:33:34.582337] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:06:59.443 [2024-11-26 19:33:34.582355] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:59.443 NEW_FUNC[1/1]: 0x1c12bc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:59.443 #19 NEW cov: 11222 ft: 15569 corp: 4/13b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 ShuffleBytes- 00:06:59.702 [2024-11-26 19:33:34.758462] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:59.702 [2024-11-26 19:33:34.758486] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid 
argument 00:06:59.702 [2024-11-26 19:33:34.758503] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:59.702 #25 NEW cov: 11222 ft: 16093 corp: 5/17b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 CopyPart- 00:06:59.702 [2024-11-26 19:33:34.934241] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:59.702 [2024-11-26 19:33:34.934265] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:06:59.702 [2024-11-26 19:33:34.934283] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:59.961 #26 NEW cov: 11222 ft: 16198 corp: 6/21b lim: 4 exec/s: 26 rss: 76Mb L: 4/4 MS: 1 ChangeBit- 00:06:59.961 [2024-11-26 19:33:35.111143] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:59.961 [2024-11-26 19:33:35.111172] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:06:59.961 [2024-11-26 19:33:35.111190] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:59.961 #27 NEW cov: 11222 ft: 16473 corp: 7/25b lim: 4 exec/s: 27 rss: 76Mb L: 4/4 MS: 1 CopyPart- 00:07:00.221 [2024-11-26 19:33:35.288532] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:00.221 [2024-11-26 19:33:35.288556] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:00.221 [2024-11-26 19:33:35.288573] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:00.221 #30 NEW cov: 11222 ft: 17360 corp: 8/29b lim: 4 exec/s: 30 rss: 76Mb L: 4/4 MS: 3 ShuffleBytes-CrossOver-CopyPart- 00:07:00.221 [2024-11-26 19:33:35.491217] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:00.221 [2024-11-26 19:33:35.491241] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:00.221 [2024-11-26 19:33:35.491259] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:00.480 #31 NEW cov: 11222 ft: 17779 corp: 9/33b lim: 4 exec/s: 31 rss: 77Mb L: 4/4 MS: 1 CrossOver- 00:07:00.480 [2024-11-26 19:33:35.679832] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:00.480 [2024-11-26 19:33:35.679857] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:00.480 [2024-11-26 19:33:35.679874] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:00.480 #32 NEW cov: 11229 ft: 17916 corp: 10/37b lim: 4 exec/s: 32 rss: 77Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:00.740 [2024-11-26 19:33:35.859033] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:00.740 [2024-11-26 19:33:35.859056] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:00.740 [2024-11-26 19:33:35.859073] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:00.740 #33 NEW cov: 11229 ft: 18008 corp: 11/41b lim: 4 exec/s: 16 rss: 77Mb L: 4/4 MS: 1 CrossOver- 00:07:00.740 #33 DONE cov: 11229 ft: 18008 corp: 11/41b lim: 4 exec/s: 16 rss: 77Mb 00:07:00.740 Done 33 runs in 2 second(s) 00:07:00.740 [2024-11-26 19:33:35.985796] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 
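At this point the driver loop in ../common.sh advances to the next target: it bumps `i`, re-checks `i < fuzz_num`, and calls start_llvm_fuzz again, while vfio/run.sh recreates the per-target directories and rewrites the shared vfio-user JSON template with sed. A rough, hypothetical re-creation of that flow is sketched below; paths are shortened and the real scripts also set up suppression files and launch the fuzzer binary:

#!/usr/bin/env bash
# Hypothetical sketch of the per-target loop seen in the trace (../common.sh@72-73,
# vfio/run.sh@36-39); not the real SPDK scripts.
fuzz_num=7                     # grep -c '\.fn =' llvm_vfio_fuzz.c yields 7 in the trace
time_per_target=1              # -t 1 in the fuzzer invocations above
template=fuzz_vfio_json.conf   # shared template shipped with the test (path shortened here)

for ((i = 0; i < fuzz_num; i++)); do
    fuzzer_dir=/tmp/vfio-user-$i
    mkdir -p "$fuzzer_dir/domain/1" "$fuzzer_dir/domain/2"

    # Point the generic /tmp/vfio-user sockets at this target's private directories,
    # exactly what the sed command in the trace does.
    sed -e "s%/tmp/vfio-user/domain/1%$fuzzer_dir/domain/1%;
            s%/tmp/vfio-user/domain/2%$fuzzer_dir/domain/2%" \
        "$template" > "$fuzzer_dir/fuzz_vfio_json.conf"

    # The actual llvm_vfio_fuzz invocation (see the vfio/run.sh@47 lines in this log
    # for the full argument list) would run here for $time_per_target seconds.
    echo "target $i configured in $fuzzer_dir"
done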
00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:00.999 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:00.999 19:33:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:00.999 [2024-11-26 19:33:36.240644] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:07:00.999 [2024-11-26 19:33:36.240712] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131501 ] 00:07:01.258 [2024-11-26 19:33:36.319948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.258 [2024-11-26 19:33:36.359962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.258 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:01.258 INFO: Seed: 1104628354 00:07:01.517 INFO: Loaded 1 modules (386754 inline 8-bit counters): 386754 [0x2c2b80c, 0x2c89ece), 00:07:01.517 INFO: Loaded 1 PC tables (386754 PCs): 386754 [0x2c89ed0,0x3270af0), 00:07:01.517 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:01.517 INFO: A corpus is not provided, starting from an empty corpus 00:07:01.517 #2 INITED exec/s: 0 rss: 67Mb 00:07:01.517 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:01.517 This may also happen if the target rejected all inputs we tried so far 00:07:01.517 [2024-11-26 19:33:36.603100] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:01.517 [2024-11-26 19:33:36.655708] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:01.776 NEW_FUNC[1/677]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:01.776 NEW_FUNC[2/677]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:01.776 #11 NEW cov: 11178 ft: 11143 corp: 2/9b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 4 InsertByte-InsertRepeatedBytes-ShuffleBytes-InsertByte- 00:07:02.035 [2024-11-26 19:33:37.106565] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:02.035 #12 NEW cov: 11192 ft: 14576 corp: 3/17b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:02.035 [2024-11-26 19:33:37.287428] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:02.295 NEW_FUNC[1/1]: 0x1c12bc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:02.295 #13 NEW cov: 11209 ft: 15250 corp: 4/25b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:02.295 [2024-11-26 19:33:37.468043] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:02.295 #14 NEW cov: 11209 ft: 16440 corp: 5/33b lim: 8 exec/s: 14 rss: 75Mb L: 8/8 MS: 1 ChangeBit- 00:07:02.554 [2024-11-26 19:33:37.664146] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:02.554 #20 NEW cov: 11209 ft: 16709 corp: 6/41b lim: 8 exec/s: 20 rss: 75Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:02.554 [2024-11-26 19:33:37.854986] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:02.813 #26 NEW cov: 11209 ft: 17152 corp: 7/49b lim: 8 exec/s: 26 rss: 75Mb L: 8/8 MS: 1 CrossOver- 00:07:02.813 [2024-11-26 19:33:38.035532] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:03.072 #27 NEW cov: 11209 ft: 17216 corp: 8/57b lim: 8 exec/s: 27 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:03.072 [2024-11-26 19:33:38.222363] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:03.072 #28 NEW cov: 11209 ft: 17300 corp: 9/65b lim: 8 exec/s: 28 rss: 75Mb L: 8/8 MS: 1 ChangeByte- 00:07:03.331 [2024-11-26 19:33:38.415621] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:03.331 #39 NEW cov: 11216 ft: 17415 corp: 10/73b lim: 8 exec/s: 39 rss: 75Mb L: 8/8 MS: 1 ChangeByte- 00:07:03.331 [2024-11-26 19:33:38.600870] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, 
command 5 00:07:03.591 #40 NEW cov: 11216 ft: 17424 corp: 11/81b lim: 8 exec/s: 20 rss: 76Mb L: 8/8 MS: 1 CrossOver- 00:07:03.591 #40 DONE cov: 11216 ft: 17424 corp: 11/81b lim: 8 exec/s: 20 rss: 76Mb 00:07:03.591 Done 40 runs in 2 second(s) 00:07:03.591 [2024-11-26 19:33:38.731825] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:03.851 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:03.851 19:33:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:03.851 [2024-11-26 19:33:38.997813] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 
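Every target finishes with a `#N DONE cov: ...` summary line like the ones in the runs above, so the final coverage per target can be pulled straight out of a saved copy of this console output. `console.log` below is only a placeholder name for wherever the output was captured:

# "console.log" is a placeholder path for a saved copy of this output.
# Each summary line looks like: "#52 DONE cov: 11237 ft: 18906 corp: 11/61b ... rss: 77Mb".
grep ' DONE cov: ' console.log |
    awk '{ for (i = 1; i <= NF; i++) if ($i == "cov:") { print "final coverage:", $(i + 1); break } }'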
00:07:03.851 [2024-11-26 19:33:38.997898] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132037 ] 00:07:03.851 [2024-11-26 19:33:39.076933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.851 [2024-11-26 19:33:39.116156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.110 INFO: Running with entropic power schedule (0xFF, 100). 00:07:04.110 INFO: Seed: 3856576297 00:07:04.110 INFO: Loaded 1 modules (386754 inline 8-bit counters): 386754 [0x2c2b80c, 0x2c89ece), 00:07:04.110 INFO: Loaded 1 PC tables (386754 PCs): 386754 [0x2c89ed0,0x3270af0), 00:07:04.110 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:04.110 INFO: A corpus is not provided, starting from an empty corpus 00:07:04.110 #2 INITED exec/s: 0 rss: 68Mb 00:07:04.110 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:04.110 This may also happen if the target rejected all inputs we tried so far 00:07:04.110 [2024-11-26 19:33:39.354140] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:04.650 NEW_FUNC[1/670]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:04.650 NEW_FUNC[2/670]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:04.650 #145 NEW cov: 11144 ft: 11140 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 3 ChangeBit-InsertRepeatedBytes-InsertByte- 00:07:04.650 NEW_FUNC[1/3]: 0x15d4768 in vfio_user_map_cmd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:1690 00:07:04.650 NEW_FUNC[2/3]: 0x15d49d8 in nvme_map_cmd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:978 00:07:04.650 #146 NEW cov: 11189 ft: 14015 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:07:04.909 #147 NEW cov: 11189 ft: 14413 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:04.909 NEW_FUNC[1/1]: 0x1c12bc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:04.909 #148 NEW cov: 11206 ft: 16424 corp: 5/129b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:07:05.169 #149 NEW cov: 11206 ft: 16869 corp: 6/161b lim: 32 exec/s: 149 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:05.429 #150 NEW cov: 11206 ft: 16982 corp: 7/193b lim: 32 exec/s: 150 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:05.688 #151 NEW cov: 11206 ft: 17002 corp: 8/225b lim: 32 exec/s: 151 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:05.688 #157 NEW cov: 11206 ft: 17070 corp: 9/257b lim: 32 exec/s: 157 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:05.948 #158 NEW cov: 11213 ft: 17490 corp: 10/289b lim: 32 exec/s: 158 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:07:06.208 #159 NEW cov: 11213 ft: 17644 corp: 11/321b lim: 32 exec/s: 79 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:07:06.208 #159 DONE cov: 11213 ft: 17644 corp: 11/321b lim: 32 exec/s: 79 rss: 76Mb 00:07:06.208 Done 159 runs in 2 second(s) 00:07:06.208 [2024-11-26 19:33:41.370823] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm 
-rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:06.467 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:06.467 19:33:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:06.467 [2024-11-26 19:33:41.638695] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:07:06.467 [2024-11-26 19:33:41.638765] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132568 ] 00:07:06.467 [2024-11-26 19:33:41.718910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.467 [2024-11-26 19:33:41.758442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.727 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:06.727 INFO: Seed: 2207608872 00:07:06.727 INFO: Loaded 1 modules (386754 inline 8-bit counters): 386754 [0x2c2b80c, 0x2c89ece), 00:07:06.727 INFO: Loaded 1 PC tables (386754 PCs): 386754 [0x2c89ed0,0x3270af0), 00:07:06.727 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:06.727 INFO: A corpus is not provided, starting from an empty corpus 00:07:06.727 #2 INITED exec/s: 0 rss: 66Mb 00:07:06.727 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:06.727 This may also happen if the target rejected all inputs we tried so far 00:07:06.727 [2024-11-26 19:33:41.999644] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:07.246 NEW_FUNC[1/677]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:07.246 NEW_FUNC[2/677]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:07.246 #30 NEW cov: 11184 ft: 11098 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 3 ShuffleBytes-InsertRepeatedBytes-InsertByte- 00:07:07.504 #36 NEW cov: 11198 ft: 14065 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:07:07.504 NEW_FUNC[1/1]: 0x1c12bc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:07.504 #42 NEW cov: 11215 ft: 14991 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:07:07.763 #58 NEW cov: 11215 ft: 15347 corp: 5/129b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:07:08.022 #59 NEW cov: 11215 ft: 15771 corp: 6/161b lim: 32 exec/s: 59 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:07:08.022 #60 NEW cov: 11215 ft: 15992 corp: 7/193b lim: 32 exec/s: 60 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:07:08.280 #61 NEW cov: 11215 ft: 16067 corp: 8/225b lim: 32 exec/s: 61 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:07:08.539 #62 NEW cov: 11215 ft: 16730 corp: 9/257b lim: 32 exec/s: 62 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:08.539 #63 NEW cov: 11222 ft: 16761 corp: 10/289b lim: 32 exec/s: 63 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:08.799 #64 pulse cov: 11222 ft: 16803 corp: 10/289b lim: 32 exec/s: 32 rss: 77Mb 00:07:08.799 #64 NEW cov: 11222 ft: 16803 corp: 11/321b lim: 32 exec/s: 32 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:08.799 #64 DONE cov: 11222 ft: 16803 corp: 11/321b lim: 32 exec/s: 32 rss: 77Mb 00:07:08.799 Done 64 runs in 2 second(s) 00:07:08.799 [2024-11-26 19:33:44.001810] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz 
-- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:09.059 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:09.059 19:33:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:09.059 [2024-11-26 19:33:44.270021] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:07:09.059 [2024-11-26 19:33:44.270092] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132993 ] 00:07:09.059 [2024-11-26 19:33:44.351727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.318 [2024-11-26 19:33:44.392995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.318 INFO: Running with entropic power schedule (0xFF, 100). 00:07:09.318 INFO: Seed: 546631526 00:07:09.318 INFO: Loaded 1 modules (386754 inline 8-bit counters): 386754 [0x2c2b80c, 0x2c89ece), 00:07:09.318 INFO: Loaded 1 PC tables (386754 PCs): 386754 [0x2c89ed0,0x3270af0), 00:07:09.318 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:09.318 INFO: A corpus is not provided, starting from an empty corpus 00:07:09.318 #2 INITED exec/s: 0 rss: 67Mb 00:07:09.318 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
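The block above repeats, for /tmp/vfio-user-5, the same per-target preparation already traced for targets 3 and 4: create the vfio-user directories, rewrite the fuzz_vfio_json.conf template so it points at this target's sockets, register two LeakSanitizer suppressions, and launch llvm_vfio_fuzz. Condensed into a standalone sketch (the redirections into the config and suppression files are not visible in the xtrace output and are assumed here; SPDK_ROOT and OUTPUT_DIR are shorthands introduced for the sketch, not variables from run.sh):

#!/usr/bin/env bash
# Hypothetical condensation of the per-target steps traced by vfio/run.sh above.
SPDK_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
OUTPUT_DIR=$SPDK_ROOT/../output

start_vfio_fuzz() {
    local fuzzer_type=$1 timen=$2 core=$3
    local corpus_dir=$SPDK_ROOT/../corpus/llvm_vfio_$fuzzer_type
    local fuzzer_dir=/tmp/vfio-user-$fuzzer_type
    local vfiouser_dir=$fuzzer_dir/domain/1
    local vfiouser_io_dir=$fuzzer_dir/domain/2
    local vfiouser_cfg=$fuzzer_dir/fuzz_vfio_json.conf
    local suppress_file=/var/tmp/suppress_vfio_fuzz
    # Declared local in the traced script; exported here so the sketch is self-contained.
    export LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0

    mkdir -p "$fuzzer_dir" "$vfiouser_dir" "$vfiouser_io_dir" "$corpus_dir"

    # Point the template JSON config at this target's vfio-user sockets.
    sed -e "s%/tmp/vfio-user/domain/1%$vfiouser_dir%" \
        -e "s%/tmp/vfio-user/domain/2%$vfiouser_io_dir%" \
        "$SPDK_ROOT/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$vfiouser_cfg"

    # Known, expected leaks are suppressed so LeakSanitizer does not fail the run.
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"

    "$SPDK_ROOT/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
        -m "$core" -s 0 -P "$OUTPUT_DIR/llvm/" \
        -F "$vfiouser_dir" -c "$vfiouser_cfg" -t "$timen" \
        -D "$corpus_dir" -Y "$vfiouser_io_dir" \
        -r "$fuzzer_dir/spdk$fuzzer_type.sock" -Z "$fuzzer_type"
}

Called as start_vfio_fuzz 5 1 0x1, this reproduces the argument shapes visible in the trace; flag semantics beyond what the log shows are not asserted here.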
00:07:09.318 This may also happen if the target rejected all inputs we tried so far 00:07:09.577 [2024-11-26 19:33:44.634520] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:09.577 [2024-11-26 19:33:44.686683] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:09.577 [2024-11-26 19:33:44.686721] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:09.837 NEW_FUNC[1/678]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:09.837 NEW_FUNC[2/678]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:09.837 #41 NEW cov: 11194 ft: 11159 corp: 2/14b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 4 InsertRepeatedBytes-ChangeByte-CrossOver-InsertRepeatedBytes- 00:07:10.097 [2024-11-26 19:33:45.147371] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:10.097 [2024-11-26 19:33:45.147417] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:10.097 #42 NEW cov: 11208 ft: 14374 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeByte- 00:07:10.097 [2024-11-26 19:33:45.320935] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:10.097 [2024-11-26 19:33:45.320966] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:10.356 NEW_FUNC[1/1]: 0x1c12bc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:10.356 #43 NEW cov: 11228 ft: 15539 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:07:10.356 [2024-11-26 19:33:45.502193] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:10.356 [2024-11-26 19:33:45.502225] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:10.356 #49 NEW cov: 11228 ft: 16293 corp: 5/53b lim: 13 exec/s: 49 rss: 76Mb L: 13/13 MS: 1 CMP- DE: "\001\000\000\000\000\000\307P"- 00:07:10.616 [2024-11-26 19:33:45.683585] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:10.616 [2024-11-26 19:33:45.683625] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:10.616 #55 NEW cov: 11228 ft: 16551 corp: 6/66b lim: 13 exec/s: 55 rss: 76Mb L: 13/13 MS: 1 ChangeByte- 00:07:10.616 [2024-11-26 19:33:45.865525] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:10.616 [2024-11-26 19:33:45.865556] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:10.876 #56 NEW cov: 11228 ft: 16927 corp: 7/79b lim: 13 exec/s: 56 rss: 76Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:10.876 [2024-11-26 19:33:46.044416] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:10.876 [2024-11-26 19:33:46.044448] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:10.876 #57 NEW cov: 11228 ft: 17353 corp: 8/92b lim: 13 exec/s: 57 rss: 76Mb L: 13/13 MS: 1 ChangeByte- 00:07:11.136 [2024-11-26 19:33:46.212447] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:11.136 [2024-11-26 19:33:46.212478] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return 
failure 00:07:11.136 #58 NEW cov: 11228 ft: 17622 corp: 9/105b lim: 13 exec/s: 58 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:07:11.136 [2024-11-26 19:33:46.398915] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:11.136 [2024-11-26 19:33:46.398946] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:11.395 #59 NEW cov: 11235 ft: 18175 corp: 10/118b lim: 13 exec/s: 59 rss: 76Mb L: 13/13 MS: 1 CrossOver- 00:07:11.395 [2024-11-26 19:33:46.579422] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:11.395 [2024-11-26 19:33:46.579454] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:11.395 #60 NEW cov: 11235 ft: 18321 corp: 11/131b lim: 13 exec/s: 30 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:07:11.395 #60 DONE cov: 11235 ft: 18321 corp: 11/131b lim: 13 exec/s: 30 rss: 76Mb 00:07:11.395 ###### Recommended dictionary. ###### 00:07:11.395 "\001\000\000\000\000\000\307P" # Uses: 1 00:07:11.395 ###### End of recommended dictionary. ###### 00:07:11.395 Done 60 runs in 2 second(s) 00:07:11.668 [2024-11-26 19:33:46.707805] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:07:11.668 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 
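This is now the third pass through the same pair of ../common.sh lines (the "(( i++ ))" and "(( i < fuzz_num ))" arithmetic at line 72, followed by "start_llvm_fuzz <type> 1 0x1" at line 73), which has the shape of a C-style for loop issuing one run per fuzzer type. A minimal reconstruction under that assumption; fuzz_num, the starting index, and anything beyond the traced call shape are guesses, not taken from the script:

# Hypothetical reconstruction of the ../common.sh driver loop implied by the
# repeated "(( i++ )) / (( i < fuzz_num ))" xtrace lines; only the call shape
# start_llvm_fuzz <fuzzer_type> <timen> <core mask> is taken from the log.
fuzz_num=${fuzz_num:-7}              # assumed value; not shown in this part of the log
for ((i = 0; i < fuzz_num; i++)); do
    start_llvm_fuzz "$i" 1 0x1       # one llvm_vfio_fuzz run per target type
done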
00:07:11.668 19:33:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:07:12.046 [2024-11-26 19:33:46.976079] Starting SPDK v25.01-pre git sha1 f5304d661 / DPDK 24.03.0 initialization... 00:07:12.046 [2024-11-26 19:33:46.976167] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133397 ] 00:07:12.046 [2024-11-26 19:33:47.057727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.046 [2024-11-26 19:33:47.097797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.046 INFO: Running with entropic power schedule (0xFF, 100). 00:07:12.046 INFO: Seed: 3250653694 00:07:12.046 INFO: Loaded 1 modules (386754 inline 8-bit counters): 386754 [0x2c2b80c, 0x2c89ece), 00:07:12.046 INFO: Loaded 1 PC tables (386754 PCs): 386754 [0x2c89ed0,0x3270af0), 00:07:12.046 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:12.046 INFO: A corpus is not provided, starting from an empty corpus 00:07:12.046 #2 INITED exec/s: 0 rss: 67Mb 00:07:12.046 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:12.046 This may also happen if the target rejected all inputs we tried so far 00:07:12.305 [2024-11-26 19:33:47.338281] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:07:12.305 [2024-11-26 19:33:47.381664] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:12.305 [2024-11-26 19:33:47.381699] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:12.564 NEW_FUNC[1/678]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:07:12.564 NEW_FUNC[2/678]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:12.564 #144 NEW cov: 11181 ft: 11152 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:12.564 [2024-11-26 19:33:47.859050] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:12.564 [2024-11-26 19:33:47.859098] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:12.823 #154 NEW cov: 11196 ft: 14545 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 5 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes-CopyPart-CopyPart- 00:07:12.823 [2024-11-26 19:33:48.049133] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:12.823 [2024-11-26 19:33:48.049165] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:13.082 NEW_FUNC[1/1]: 0x1c12bc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:13.082 #157 NEW cov: 11213 ft: 15573 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 3 
CrossOver-CopyPart-InsertByte- 00:07:13.082 [2024-11-26 19:33:48.227871] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:13.082 [2024-11-26 19:33:48.227903] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:13.082 #158 NEW cov: 11213 ft: 15940 corp: 5/37b lim: 9 exec/s: 158 rss: 76Mb L: 9/9 MS: 1 CopyPart- 00:07:13.341 [2024-11-26 19:33:48.412005] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:13.341 [2024-11-26 19:33:48.412037] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:13.341 #159 NEW cov: 11213 ft: 16581 corp: 6/46b lim: 9 exec/s: 159 rss: 78Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:13.341 [2024-11-26 19:33:48.593743] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:13.341 [2024-11-26 19:33:48.593774] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:13.600 #164 NEW cov: 11216 ft: 16817 corp: 7/55b lim: 9 exec/s: 164 rss: 78Mb L: 9/9 MS: 5 ChangeBit-CrossOver-ChangeByte-ChangeByte-CrossOver- 00:07:13.600 [2024-11-26 19:33:48.775679] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:13.600 [2024-11-26 19:33:48.775712] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:13.600 #170 NEW cov: 11216 ft: 16914 corp: 8/64b lim: 9 exec/s: 170 rss: 78Mb L: 9/9 MS: 1 CrossOver- 00:07:13.859 [2024-11-26 19:33:48.953430] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:13.859 [2024-11-26 19:33:48.953463] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:13.859 #171 NEW cov: 11216 ft: 17340 corp: 9/73b lim: 9 exec/s: 171 rss: 78Mb L: 9/9 MS: 1 CrossOver- 00:07:13.859 [2024-11-26 19:33:49.126802] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:13.859 [2024-11-26 19:33:49.126833] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:14.118 #172 NEW cov: 11223 ft: 17510 corp: 10/82b lim: 9 exec/s: 172 rss: 78Mb L: 9/9 MS: 1 ChangeByte- 00:07:14.118 [2024-11-26 19:33:49.303419] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:14.118 [2024-11-26 19:33:49.303448] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:14.118 #176 NEW cov: 11223 ft: 17587 corp: 11/91b lim: 9 exec/s: 88 rss: 78Mb L: 9/9 MS: 4 EraseBytes-CrossOver-ChangeByte-CopyPart- 00:07:14.118 #176 DONE cov: 11223 ft: 17587 corp: 11/91b lim: 9 exec/s: 88 rss: 78Mb 00:07:14.118 Done 176 runs in 2 second(s) 00:07:14.118 [2024-11-26 19:33:49.422802] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:07:14.377 19:33:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:07:14.377 19:33:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:14.377 19:33:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:14.377 19:33:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:07:14.377 00:07:14.377 real 0m19.343s 00:07:14.377 user 0m27.305s 00:07:14.377 sys 0m1.833s 00:07:14.377 19:33:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.377 19:33:49 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:07:14.377 ************************************ 00:07:14.377 END TEST vfio_llvm_fuzz 00:07:14.377 ************************************ 00:07:14.637 00:07:14.637 real 1m25.335s 00:07:14.637 user 2m7.795s 00:07:14.637 sys 0m10.975s 00:07:14.637 19:33:49 llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.637 19:33:49 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:14.637 ************************************ 00:07:14.637 END TEST llvm_fuzz 00:07:14.637 ************************************ 00:07:14.637 19:33:49 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:07:14.637 19:33:49 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:07:14.637 19:33:49 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:07:14.637 19:33:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.637 19:33:49 -- common/autotest_common.sh@10 -- # set +x 00:07:14.637 19:33:49 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:07:14.637 19:33:49 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:07:14.637 19:33:49 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:07:14.637 19:33:49 -- common/autotest_common.sh@10 -- # set +x 00:07:21.211 INFO: APP EXITING 00:07:21.211 INFO: killing all VMs 00:07:21.211 INFO: killing vhost app 00:07:21.211 INFO: EXIT DONE 00:07:24.503 Waiting for block devices as requested 00:07:24.503 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:24.503 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:24.503 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:24.503 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:24.503 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:24.503 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:24.762 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:24.762 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:24.762 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:25.023 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:25.023 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:25.023 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:25.282 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:25.282 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:25.282 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:25.542 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:25.542 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:07:28.832 Cleaning 00:07:28.832 Removing: /dev/shm/spdk_tgt_trace.pid1105158 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1102686 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1103866 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1105158 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1105618 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1106697 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1106721 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1107828 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1107840 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1108275 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1108603 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1108926 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1109235 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1109357 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1109627 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1109907 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1110234 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1110977 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1113994 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1114285 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1114573 
00:07:28.832 Removing: /var/run/dpdk/spdk_pid1114584 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1115148 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1115159 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1115733 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1115743 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1116073 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1116265 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1116352 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1116532 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1116978 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1117258 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1117539 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1117665 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1118370 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1118840 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1119194 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1119730 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1120263 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1120585 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1121094 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1121629 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1122038 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1122453 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1122983 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1123439 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1123804 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1124344 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1124980 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1125495 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1126262 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1126792 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1127264 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1127621 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1128159 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1128666 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1128986 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1129514 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1130047 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1130672 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1131114 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1131501 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1132037 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1132568 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1132993 00:07:28.832 Removing: /var/run/dpdk/spdk_pid1133397 00:07:28.832 Clean 00:07:28.832 19:34:04 -- common/autotest_common.sh@1453 -- # return 0 00:07:28.832 19:34:04 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:07:28.832 19:34:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:28.832 19:34:04 -- common/autotest_common.sh@10 -- # set +x 00:07:28.832 19:34:04 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:07:28.832 19:34:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:28.832 19:34:04 -- common/autotest_common.sh@10 -- # set +x 00:07:28.832 19:34:04 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:07:28.832 19:34:04 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:07:28.832 19:34:04 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:07:28.832 19:34:04 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:07:28.832 19:34:04 -- spdk/autotest.sh@398 -- # hostname 00:07:28.832 19:34:04 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:07:29.091 geninfo: WARNING: invalid characters removed from testname! 00:07:31.629 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:07:34.920 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:07:37.503 19:34:12 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:07:45.623 19:34:20 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:07:50.896 19:34:25 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:07:56.189 19:34:30 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:01.467 19:34:35 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:06.741 19:34:41 -- spdk/autotest.sh@407 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:12.017 19:34:46 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:08:12.017 19:34:46 -- spdk/autorun.sh@1 -- $ timing_finish 00:08:12.017 19:34:46 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]] 00:08:12.017 19:34:46 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:08:12.017 19:34:46 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:08:12.017 19:34:46 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:12.017 + [[ -n 993217 ]] 00:08:12.017 + sudo kill 993217 00:08:12.027 [Pipeline] } 00:08:12.043 [Pipeline] // stage 00:08:12.048 [Pipeline] } 00:08:12.063 [Pipeline] // timeout 00:08:12.069 [Pipeline] } 00:08:12.083 [Pipeline] // catchError 00:08:12.088 [Pipeline] } 00:08:12.103 [Pipeline] // wrap 00:08:12.110 [Pipeline] } 00:08:12.123 [Pipeline] // catchError 00:08:12.132 [Pipeline] stage 00:08:12.134 [Pipeline] { (Epilogue) 00:08:12.148 [Pipeline] catchError 00:08:12.150 [Pipeline] { 00:08:12.164 [Pipeline] echo 00:08:12.167 Cleanup processes 00:08:12.174 [Pipeline] sh 00:08:12.459 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:12.459 1141804 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:12.473 [Pipeline] sh 00:08:12.757 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:12.758 ++ grep -v 'sudo pgrep' 00:08:12.758 ++ awk '{print $1}' 00:08:12.758 + sudo kill -9 00:08:12.758 + true 00:08:12.770 [Pipeline] sh 00:08:13.055 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:08:13.055 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:08:13.055 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:08:14.436 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:08:24.514 [Pipeline] sh 00:08:24.795 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:08:24.795 Artifacts sizes are good 00:08:24.808 [Pipeline] archiveArtifacts 00:08:24.815 Archiving artifacts 00:08:24.965 [Pipeline] sh 00:08:25.240 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:08:25.254 [Pipeline] cleanWs 00:08:25.261 [WS-CLEANUP] Deleting project workspace... 00:08:25.261 [WS-CLEANUP] Deferred wipeout is used... 00:08:25.273 [WS-CLEANUP] done 00:08:25.275 [Pipeline] } 00:08:25.294 [Pipeline] // catchError 00:08:25.305 [Pipeline] sh 00:08:25.583 + logger -p user.info -t JENKINS-CI 00:08:25.591 [Pipeline] } 00:08:25.603 [Pipeline] // stage 00:08:25.609 [Pipeline] } 00:08:25.622 [Pipeline] // node 00:08:25.627 [Pipeline] End of Pipeline 00:08:25.664 Finished: SUCCESS
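For reference, the coverage post-processing traced near the end (the autotest.sh lcov steps at lines 398-408) is a standard lcov capture, merge, and filter sequence. Condensed below with the long workspace paths replaced by SPDK/OUT shorthands and the repeated option flags collapsed into one variable; the per-pattern --ignore-errors flag and the OLD_STDOUT/OLD_STDERR cleanup are omitted, and cov_base.info is presumably captured earlier in the job:

# Condensed from the lcov steps traced above; SPDK, OUT and LCOV_OPTS are
# shorthands introduced for this sketch, not variables from autotest.sh.
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
OUT=$SPDK/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
           --gcov-tool $SPDK/test/fuzz/llvm/llvm-gcov.sh -q"

# Capture the counters written during the test run (clang coverage via llvm-gcov.sh).
lcov $LCOV_OPTS -c --no-external -d "$SPDK" -t spdk-wfp-20 -o "$OUT/cov_test.info"

# Merge with the baseline capture taken before the tests ran.
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Strip third-party and helper-tool code out of the combined report.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done

rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"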