00:00:00.001 Started by upstream project "autotest-per-patch" build number 132794 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.043 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.044 The recommended git tool is: git 00:00:00.044 using credential 00000000-0000-0000-0000-000000000002 00:00:00.045 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.064 Fetching changes from the remote Git repository 00:00:00.065 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.092 Using shallow fetch with depth 1 00:00:00.092 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.092 > git --version # timeout=10 00:00:00.141 > git --version # 'git version 2.39.2' 00:00:00.141 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.208 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.208 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.487 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.498 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.508 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.508 > git config core.sparsecheckout # timeout=10 00:00:04.520 > git read-tree -mu HEAD # timeout=10 00:00:04.535 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.561 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.561 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.681 [Pipeline] Start of Pipeline 00:00:04.692 [Pipeline] library 00:00:04.693 Loading library shm_lib@master 00:00:04.693 Library shm_lib@master is cached. Copying from home. 00:00:04.710 [Pipeline] node 00:00:04.728 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:04.730 [Pipeline] { 00:00:04.741 [Pipeline] catchError 00:00:04.742 [Pipeline] { 00:00:04.755 [Pipeline] wrap 00:00:04.763 [Pipeline] { 00:00:04.771 [Pipeline] stage 00:00:04.773 [Pipeline] { (Prologue) 00:00:05.041 [Pipeline] sh 00:00:05.428 + logger -p user.info -t JENKINS-CI 00:00:05.476 [Pipeline] echo 00:00:05.478 Node: WFP20 00:00:05.484 [Pipeline] sh 00:00:05.831 [Pipeline] setCustomBuildProperty 00:00:05.842 [Pipeline] echo 00:00:05.844 Cleanup processes 00:00:05.850 [Pipeline] sh 00:00:06.176 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.176 166520 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.269 [Pipeline] sh 00:00:06.675 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.675 ++ grep -v 'sudo pgrep' 00:00:06.675 ++ awk '{print $1}' 00:00:06.675 + sudo kill -9 00:00:06.675 + true 00:00:06.730 [Pipeline] cleanWs 00:00:06.755 [WS-CLEANUP] Deleting project workspace... 00:00:06.755 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.788 [WS-CLEANUP] done 00:00:06.792 [Pipeline] setCustomBuildProperty 00:00:06.804 [Pipeline] sh 00:00:07.095 + sudo git config --global --replace-all safe.directory '*' 00:00:07.213 [Pipeline] httpRequest 00:00:07.596 [Pipeline] echo 00:00:07.597 Sorcerer 10.211.164.112 is alive 00:00:07.603 [Pipeline] retry 00:00:07.604 [Pipeline] { 00:00:07.613 [Pipeline] httpRequest 00:00:07.622 HttpMethod: GET 00:00:07.623 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.625 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.629 Response Code: HTTP/1.1 200 OK 00:00:07.630 Success: Status code 200 is in the accepted range: 200,404 00:00:07.630 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.707 [Pipeline] } 00:00:08.722 [Pipeline] // retry 00:00:08.727 [Pipeline] sh 00:00:09.130 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.160 [Pipeline] httpRequest 00:00:09.628 [Pipeline] echo 00:00:09.630 Sorcerer 10.211.164.112 is alive 00:00:09.641 [Pipeline] retry 00:00:09.644 [Pipeline] { 00:00:09.657 [Pipeline] httpRequest 00:00:09.664 HttpMethod: GET 00:00:09.664 URL: http://10.211.164.112/packages/spdk_496bfd677005e62b85d6d26bda2d98fe14c1b5fc.tar.gz 00:00:09.672 Sending request to url: http://10.211.164.112/packages/spdk_496bfd677005e62b85d6d26bda2d98fe14c1b5fc.tar.gz 00:00:09.696 Response Code: HTTP/1.1 200 OK 00:00:09.697 Success: Status code 200 is in the accepted range: 200,404 00:00:09.697 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_496bfd677005e62b85d6d26bda2d98fe14c1b5fc.tar.gz 00:00:54.865 [Pipeline] } 00:00:54.883 [Pipeline] // retry 00:00:54.892 [Pipeline] sh 00:00:55.238 + tar --no-same-owner -xf spdk_496bfd677005e62b85d6d26bda2d98fe14c1b5fc.tar.gz 00:00:57.792 [Pipeline] sh 00:00:58.080 + git -C spdk log --oneline -n5 00:00:58.080 496bfd677 env: match legacy mem mode config with DPDK 00:00:58.080 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails 00:00:58.080 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions 00:00:58.080 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove 00:00:58.080 0ea9ac02f accel/mlx5: Create pool of UMRs 00:00:58.091 [Pipeline] } 00:00:58.104 [Pipeline] // stage 00:00:58.111 [Pipeline] stage 00:00:58.113 [Pipeline] { (Prepare) 00:00:58.127 [Pipeline] writeFile 00:00:58.140 [Pipeline] sh 00:00:58.425 + logger -p user.info -t JENKINS-CI 00:00:58.439 [Pipeline] sh 00:00:58.731 + logger -p user.info -t JENKINS-CI 00:00:58.743 [Pipeline] sh 00:00:59.029 + cat autorun-spdk.conf 00:00:59.029 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.029 SPDK_TEST_FUZZER_SHORT=1 00:00:59.029 SPDK_TEST_FUZZER=1 00:00:59.029 SPDK_TEST_SETUP=1 00:00:59.029 SPDK_RUN_UBSAN=1 00:00:59.036 RUN_NIGHTLY=0 00:00:59.041 [Pipeline] readFile 00:00:59.061 [Pipeline] withEnv 00:00:59.063 [Pipeline] { 00:00:59.074 [Pipeline] sh 00:00:59.362 + set -ex 00:00:59.362 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:00:59.362 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:59.362 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.362 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:59.362 ++ SPDK_TEST_FUZZER=1 00:00:59.362 ++ SPDK_TEST_SETUP=1 00:00:59.362 ++ SPDK_RUN_UBSAN=1 00:00:59.362 ++ RUN_NIGHTLY=0 00:00:59.362 + case $SPDK_TEST_NVMF_NICS in 00:00:59.362 + DRIVERS= 
00:00:59.362 + [[ -n '' ]] 00:00:59.362 + exit 0 00:00:59.372 [Pipeline] } 00:00:59.386 [Pipeline] // withEnv 00:00:59.391 [Pipeline] } 00:00:59.403 [Pipeline] // stage 00:00:59.411 [Pipeline] catchError 00:00:59.413 [Pipeline] { 00:00:59.425 [Pipeline] timeout 00:00:59.425 Timeout set to expire in 30 min 00:00:59.427 [Pipeline] { 00:00:59.439 [Pipeline] stage 00:00:59.441 [Pipeline] { (Tests) 00:00:59.453 [Pipeline] sh 00:00:59.742 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:59.742 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:59.742 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:00:59.742 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:00:59.742 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:59.742 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:59.742 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:00:59.742 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:59.742 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:59.742 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:59.742 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:00:59.742 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:59.742 + source /etc/os-release 00:00:59.742 ++ NAME='Fedora Linux' 00:00:59.742 ++ VERSION='39 (Cloud Edition)' 00:00:59.742 ++ ID=fedora 00:00:59.742 ++ VERSION_ID=39 00:00:59.742 ++ VERSION_CODENAME= 00:00:59.742 ++ PLATFORM_ID=platform:f39 00:00:59.742 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:59.742 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:59.742 ++ LOGO=fedora-logo-icon 00:00:59.742 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:59.742 ++ HOME_URL=https://fedoraproject.org/ 00:00:59.742 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:59.742 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:59.742 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:59.742 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:59.742 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:59.742 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:59.742 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:59.742 ++ SUPPORT_END=2024-11-12 00:00:59.742 ++ VARIANT='Cloud Edition' 00:00:59.742 ++ VARIANT_ID=cloud 00:00:59.742 + uname -a 00:00:59.742 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:59.742 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:03.048 Hugepages 00:01:03.048 node hugesize free / total 00:01:03.048 node0 1048576kB 0 / 0 00:01:03.048 node0 2048kB 0 / 0 00:01:03.048 node1 1048576kB 0 / 0 00:01:03.048 node1 2048kB 0 / 0 00:01:03.048 00:01:03.048 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:03.048 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:03.048 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:03.048 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:03.048 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:03.048 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:03.048 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:03.048 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:03.048 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:03.048 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:03.048 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:03.048 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:03.048 I/OAT 0000:80:04.3 
8086 2021 1 ioatdma - - 00:01:03.048 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:03.048 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:03.048 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:03.048 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:03.048 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:03.048 + rm -f /tmp/spdk-ld-path 00:01:03.048 + source autorun-spdk.conf 00:01:03.048 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.048 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:03.048 ++ SPDK_TEST_FUZZER=1 00:01:03.048 ++ SPDK_TEST_SETUP=1 00:01:03.048 ++ SPDK_RUN_UBSAN=1 00:01:03.048 ++ RUN_NIGHTLY=0 00:01:03.048 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:03.048 + [[ -n '' ]] 00:01:03.048 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:03.048 + for M in /var/spdk/build-*-manifest.txt 00:01:03.048 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:03.048 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:03.048 + for M in /var/spdk/build-*-manifest.txt 00:01:03.048 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:03.048 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:03.048 + for M in /var/spdk/build-*-manifest.txt 00:01:03.048 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:03.048 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:03.048 ++ uname 00:01:03.048 + [[ Linux == \L\i\n\u\x ]] 00:01:03.048 + sudo dmesg -T 00:01:03.048 + sudo dmesg --clear 00:01:03.048 + dmesg_pid=167454 00:01:03.048 + [[ Fedora Linux == FreeBSD ]] 00:01:03.048 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:03.048 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:03.048 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:03.048 + [[ -x /usr/src/fio-static/fio ]] 00:01:03.048 + sudo dmesg -Tw 00:01:03.048 + export FIO_BIN=/usr/src/fio-static/fio 00:01:03.048 + FIO_BIN=/usr/src/fio-static/fio 00:01:03.048 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:03.048 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:03.048 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:03.048 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:03.048 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:03.048 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:03.048 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:03.048 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:03.048 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:03.048 13:08:05 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:03.048 13:08:05 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:03.048 13:08:05 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.048 13:08:05 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1 00:01:03.048 13:08:05 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1 00:01:03.048 13:08:05 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1 00:01:03.048 13:08:05 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:03.048 13:08:05 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:03.048 13:08:05 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:03.048 13:08:05 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:03.048 13:08:05 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:03.048 13:08:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:03.048 13:08:05 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:03.048 13:08:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:03.048 13:08:05 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:03.310 13:08:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:03.310 13:08:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.310 13:08:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.310 13:08:05 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.310 13:08:05 -- paths/export.sh@5 -- $ export PATH 00:01:03.310 13:08:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.310 13:08:05 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:03.310 13:08:05 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:03.310 13:08:05 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733746085.XXXXXX 00:01:03.310 13:08:05 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733746085.MryLMj 00:01:03.310 13:08:05 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:03.310 13:08:05 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:03.310 13:08:05 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:03.310 13:08:05 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:03.310 13:08:05 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:03.310 13:08:05 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:03.310 13:08:05 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:03.310 13:08:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:03.310 13:08:05 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:03.310 13:08:05 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:03.310 13:08:05 -- pm/common@17 -- $ local monitor 00:01:03.310 13:08:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.310 13:08:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.310 13:08:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.310 13:08:05 -- pm/common@21 -- $ date +%s 00:01:03.310 13:08:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.310 13:08:05 -- pm/common@21 -- $ date +%s 00:01:03.310 13:08:05 -- pm/common@25 -- $ sleep 1 00:01:03.310 13:08:05 -- pm/common@21 -- $ date +%s 00:01:03.310 13:08:05 -- pm/common@21 -- $ date +%s 00:01:03.310 13:08:05 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733746085 00:01:03.310 13:08:05 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733746085 00:01:03.310 13:08:05 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733746085 00:01:03.310 13:08:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733746085 00:01:03.310 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733746085_collect-cpu-load.pm.log 00:01:03.310 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733746085_collect-vmstat.pm.log 00:01:03.310 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733746085_collect-cpu-temp.pm.log 00:01:03.310 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733746085_collect-bmc-pm.bmc.pm.log 00:01:04.255 13:08:06 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:04.255 13:08:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:04.255 13:08:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:04.255 13:08:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:04.255 13:08:06 -- spdk/autobuild.sh@16 -- $ date -u 00:01:04.255 Mon Dec 9 12:08:06 PM UTC 2024 00:01:04.255 13:08:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:04.255 v25.01-pre-312-g496bfd677 00:01:04.255 13:08:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:04.255 13:08:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:04.255 13:08:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:04.255 13:08:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:04.255 13:08:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:04.255 13:08:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:04.255 ************************************ 00:01:04.255 START TEST ubsan 00:01:04.255 ************************************ 00:01:04.255 13:08:06 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:04.255 using ubsan 00:01:04.255 00:01:04.255 real 0m0.001s 00:01:04.255 user 0m0.000s 00:01:04.255 sys 0m0.001s 00:01:04.255 13:08:06 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:04.255 13:08:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:04.255 ************************************ 00:01:04.255 END TEST ubsan 00:01:04.255 ************************************ 00:01:04.255 13:08:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:04.255 13:08:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:04.255 13:08:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:04.255 13:08:06 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:04.255 13:08:06 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:04.255 13:08:06 -- common/autobuild_common.sh@445 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:04.255 13:08:06 -- common/autotest_common.sh@1105 -- $ '[' 2 
-le 1 ']' 00:01:04.255 13:08:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:04.255 13:08:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:04.255 ************************************ 00:01:04.255 START TEST autobuild_llvm_precompile 00:01:04.255 ************************************ 00:01:04.255 13:08:06 autobuild_llvm_precompile -- common/autotest_common.sh@1129 -- $ _llvm_precompile 00:01:04.515 13:08:06 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:01:04.515 13:08:06 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:01:04.515 Target: x86_64-redhat-linux-gnu 00:01:04.515 Thread model: posix 00:01:04.515 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:04.515 13:08:06 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:01:04.515 13:08:06 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:01:04.515 13:08:06 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:01:04.515 13:08:06 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:01:04.515 13:08:06 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:01:04.515 13:08:06 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:04.515 13:08:06 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:04.515 13:08:06 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:01:04.515 13:08:06 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:01:04.515 13:08:06 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:04.774 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:04.774 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:05.034 Using 'verbs' RDMA provider 00:01:20.875 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:35.773 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:35.773 Creating mk/config.mk...done. 00:01:35.773 Creating mk/cc.flags.mk...done. 00:01:35.773 Type 'make' to build. 
00:01:35.773 00:01:35.773 real 0m30.009s 00:01:35.773 user 0m13.209s 00:01:35.773 sys 0m16.269s 00:01:35.773 13:08:36 autobuild_llvm_precompile -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:35.773 13:08:36 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:35.773 ************************************ 00:01:35.773 END TEST autobuild_llvm_precompile 00:01:35.773 ************************************ 00:01:35.773 13:08:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:35.773 13:08:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:35.773 13:08:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:35.773 13:08:36 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:35.773 13:08:36 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:35.773 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:35.773 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:35.773 Using 'verbs' RDMA provider 00:01:48.257 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:00.479 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:00.479 Creating mk/config.mk...done. 00:02:00.479 Creating mk/cc.flags.mk...done. 00:02:00.479 Type 'make' to build. 00:02:00.479 13:09:02 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:02:00.479 13:09:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:00.479 13:09:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:00.479 13:09:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:00.479 ************************************ 00:02:00.479 START TEST make 00:02:00.479 ************************************ 00:02:00.479 13:09:02 make -- common/autotest_common.sh@1129 -- $ make -j112 00:02:00.479 make[1]: Nothing to be done for 'all'. 
00:02:02.389 The Meson build system 00:02:02.389 Version: 1.5.0 00:02:02.389 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:02:02.389 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:02.389 Build type: native build 00:02:02.389 Project name: libvfio-user 00:02:02.389 Project version: 0.0.1 00:02:02.389 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:02.389 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:02.389 Host machine cpu family: x86_64 00:02:02.389 Host machine cpu: x86_64 00:02:02.389 Run-time dependency threads found: YES 00:02:02.389 Library dl found: YES 00:02:02.389 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:02.389 Run-time dependency json-c found: YES 0.17 00:02:02.389 Run-time dependency cmocka found: YES 1.1.7 00:02:02.389 Program pytest-3 found: NO 00:02:02.389 Program flake8 found: NO 00:02:02.389 Program misspell-fixer found: NO 00:02:02.389 Program restructuredtext-lint found: NO 00:02:02.389 Program valgrind found: YES (/usr/bin/valgrind) 00:02:02.389 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:02.389 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:02.389 Compiler for C supports arguments -Wwrite-strings: YES 00:02:02.389 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:02.389 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:02.389 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:02.389 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:02.389 Build targets in project: 8 00:02:02.389 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:02.389 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:02.389 00:02:02.389 libvfio-user 0.0.1 00:02:02.389 00:02:02.389 User defined options 00:02:02.389 buildtype : debug 00:02:02.389 default_library: static 00:02:02.389 libdir : /usr/local/lib 00:02:02.389 00:02:02.389 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:02.389 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:02.389 [1/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:02.389 [2/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:02.389 [3/36] Compiling C object samples/null.p/null.c.o 00:02:02.389 [4/36] Compiling C object samples/lspci.p/lspci.c.o 00:02:02.389 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:02:02.389 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:02:02.389 [7/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:02.389 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:02:02.389 [9/36] Compiling C object test/unit_tests.p/mocks.c.o 00:02:02.389 [10/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:02.389 [11/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:02.389 [12/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:02.389 [13/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:02:02.389 [14/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:02.389 [15/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:02.389 [16/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:02.389 [17/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:02:02.389 [18/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:02:02.389 [19/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:02:02.389 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:02.389 [21/36] Compiling C object samples/server.p/server.c.o 00:02:02.389 [22/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:02.389 [23/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:02.389 [24/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:02.389 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:02.389 [26/36] Compiling C object samples/client.p/client.c.o 00:02:02.650 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:02:02.650 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:02.650 [29/36] Linking target samples/client 00:02:02.650 [30/36] Linking static target lib/libvfio-user.a 00:02:02.650 [31/36] Linking target test/unit_tests 00:02:02.650 [32/36] Linking target samples/null 00:02:02.650 [33/36] Linking target samples/server 00:02:02.650 [34/36] Linking target samples/shadow_ioeventfd_server 00:02:02.650 [35/36] Linking target samples/lspci 00:02:02.650 [36/36] Linking target samples/gpio-pci-idio-16 00:02:02.650 INFO: autodetecting backend as ninja 00:02:02.650 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:02.650 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:02.911 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:03.171 ninja: no work to do. 00:02:08.459 The Meson build system 00:02:08.459 Version: 1.5.0 00:02:08.459 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:08.459 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:08.459 Build type: native build 00:02:08.459 Program cat found: YES (/usr/bin/cat) 00:02:08.459 Project name: DPDK 00:02:08.459 Project version: 24.03.0 00:02:08.459 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:08.459 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:08.459 Host machine cpu family: x86_64 00:02:08.459 Host machine cpu: x86_64 00:02:08.459 Message: ## Building in Developer Mode ## 00:02:08.459 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:08.459 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:08.459 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:08.459 Program python3 found: YES (/usr/bin/python3) 00:02:08.459 Program cat found: YES (/usr/bin/cat) 00:02:08.459 Compiler for C supports arguments -march=native: YES 00:02:08.459 Checking for size of "void *" : 8 00:02:08.459 Checking for size of "void *" : 8 (cached) 00:02:08.459 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:08.459 Library m found: YES 00:02:08.459 Library numa found: YES 00:02:08.459 Has header "numaif.h" : YES 00:02:08.459 Library fdt found: NO 00:02:08.459 Library execinfo found: NO 00:02:08.459 Has header "execinfo.h" : YES 00:02:08.459 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:08.459 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:08.459 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:08.459 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:08.459 Run-time dependency openssl found: YES 3.1.1 00:02:08.459 Run-time dependency libpcap found: YES 1.10.4 00:02:08.459 Has header "pcap.h" with dependency libpcap: YES 00:02:08.459 Compiler for C supports arguments -Wcast-qual: YES 00:02:08.459 Compiler for C supports arguments -Wdeprecated: YES 00:02:08.459 Compiler for C supports arguments -Wformat: YES 00:02:08.459 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:08.459 Compiler for C supports arguments -Wformat-security: YES 00:02:08.459 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:08.459 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:08.459 Compiler for C supports arguments -Wnested-externs: YES 00:02:08.459 Compiler for C supports arguments -Wold-style-definition: YES 00:02:08.459 Compiler for C supports arguments -Wpointer-arith: YES 00:02:08.459 Compiler for C supports arguments -Wsign-compare: YES 00:02:08.459 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:08.459 Compiler for C supports arguments -Wundef: YES 00:02:08.459 Compiler for C supports arguments -Wwrite-strings: YES 00:02:08.459 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:08.459 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:08.459 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:02:08.459 Program objdump found: YES (/usr/bin/objdump) 00:02:08.459 Compiler for C supports arguments -mavx512f: YES 00:02:08.459 Checking if "AVX512 checking" compiles: YES 00:02:08.459 Fetching value of define "__SSE4_2__" : 1 00:02:08.459 Fetching value of define "__AES__" : 1 00:02:08.459 Fetching value of define "__AVX__" : 1 00:02:08.459 Fetching value of define "__AVX2__" : 1 00:02:08.459 Fetching value of define "__AVX512BW__" : 1 00:02:08.459 Fetching value of define "__AVX512CD__" : 1 00:02:08.459 Fetching value of define "__AVX512DQ__" : 1 00:02:08.459 Fetching value of define "__AVX512F__" : 1 00:02:08.459 Fetching value of define "__AVX512VL__" : 1 00:02:08.459 Fetching value of define "__PCLMUL__" : 1 00:02:08.459 Fetching value of define "__RDRND__" : 1 00:02:08.459 Fetching value of define "__RDSEED__" : 1 00:02:08.459 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:08.459 Fetching value of define "__znver1__" : (undefined) 00:02:08.459 Fetching value of define "__znver2__" : (undefined) 00:02:08.459 Fetching value of define "__znver3__" : (undefined) 00:02:08.459 Fetching value of define "__znver4__" : (undefined) 00:02:08.459 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:08.459 Message: lib/log: Defining dependency "log" 00:02:08.459 Message: lib/kvargs: Defining dependency "kvargs" 00:02:08.459 Message: lib/telemetry: Defining dependency "telemetry" 00:02:08.459 Checking for function "getentropy" : NO 00:02:08.459 Message: lib/eal: Defining dependency "eal" 00:02:08.459 Message: lib/ring: Defining dependency "ring" 00:02:08.459 Message: lib/rcu: Defining dependency "rcu" 00:02:08.459 Message: lib/mempool: Defining dependency "mempool" 00:02:08.459 Message: lib/mbuf: Defining dependency "mbuf" 00:02:08.459 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:08.459 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:08.459 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:08.459 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:08.459 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:08.459 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:08.459 Compiler for C supports arguments -mpclmul: YES 00:02:08.459 Compiler for C supports arguments -maes: YES 00:02:08.459 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:08.459 Compiler for C supports arguments -mavx512bw: YES 00:02:08.459 Compiler for C supports arguments -mavx512dq: YES 00:02:08.459 Compiler for C supports arguments -mavx512vl: YES 00:02:08.459 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:08.459 Compiler for C supports arguments -mavx2: YES 00:02:08.460 Compiler for C supports arguments -mavx: YES 00:02:08.460 Message: lib/net: Defining dependency "net" 00:02:08.460 Message: lib/meter: Defining dependency "meter" 00:02:08.460 Message: lib/ethdev: Defining dependency "ethdev" 00:02:08.460 Message: lib/pci: Defining dependency "pci" 00:02:08.460 Message: lib/cmdline: Defining dependency "cmdline" 00:02:08.460 Message: lib/hash: Defining dependency "hash" 00:02:08.460 Message: lib/timer: Defining dependency "timer" 00:02:08.460 Message: lib/compressdev: Defining dependency "compressdev" 00:02:08.460 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:08.460 Message: lib/dmadev: Defining dependency "dmadev" 00:02:08.460 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:08.460 Message: lib/power: Defining dependency "power" 00:02:08.460 Message: lib/reorder: Defining 
dependency "reorder" 00:02:08.460 Message: lib/security: Defining dependency "security" 00:02:08.460 Has header "linux/userfaultfd.h" : YES 00:02:08.460 Has header "linux/vduse.h" : YES 00:02:08.460 Message: lib/vhost: Defining dependency "vhost" 00:02:08.460 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:08.460 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:08.460 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:08.460 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:08.460 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:08.460 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:08.460 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:08.460 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:08.460 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:08.460 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:08.460 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:08.460 Configuring doxy-api-html.conf using configuration 00:02:08.460 Configuring doxy-api-man.conf using configuration 00:02:08.460 Program mandb found: YES (/usr/bin/mandb) 00:02:08.460 Program sphinx-build found: NO 00:02:08.460 Configuring rte_build_config.h using configuration 00:02:08.460 Message: 00:02:08.460 ================= 00:02:08.460 Applications Enabled 00:02:08.460 ================= 00:02:08.460 00:02:08.460 apps: 00:02:08.460 00:02:08.460 00:02:08.460 Message: 00:02:08.460 ================= 00:02:08.460 Libraries Enabled 00:02:08.460 ================= 00:02:08.460 00:02:08.460 libs: 00:02:08.460 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:08.460 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:08.460 cryptodev, dmadev, power, reorder, security, vhost, 00:02:08.460 00:02:08.460 Message: 00:02:08.460 =============== 00:02:08.460 Drivers Enabled 00:02:08.460 =============== 00:02:08.460 00:02:08.460 common: 00:02:08.460 00:02:08.460 bus: 00:02:08.460 pci, vdev, 00:02:08.460 mempool: 00:02:08.460 ring, 00:02:08.460 dma: 00:02:08.460 00:02:08.460 net: 00:02:08.460 00:02:08.460 crypto: 00:02:08.460 00:02:08.460 compress: 00:02:08.460 00:02:08.460 vdpa: 00:02:08.460 00:02:08.460 00:02:08.460 Message: 00:02:08.460 ================= 00:02:08.460 Content Skipped 00:02:08.460 ================= 00:02:08.460 00:02:08.460 apps: 00:02:08.460 dumpcap: explicitly disabled via build config 00:02:08.460 graph: explicitly disabled via build config 00:02:08.460 pdump: explicitly disabled via build config 00:02:08.460 proc-info: explicitly disabled via build config 00:02:08.460 test-acl: explicitly disabled via build config 00:02:08.460 test-bbdev: explicitly disabled via build config 00:02:08.460 test-cmdline: explicitly disabled via build config 00:02:08.460 test-compress-perf: explicitly disabled via build config 00:02:08.460 test-crypto-perf: explicitly disabled via build config 00:02:08.460 test-dma-perf: explicitly disabled via build config 00:02:08.460 test-eventdev: explicitly disabled via build config 00:02:08.460 test-fib: explicitly disabled via build config 00:02:08.460 test-flow-perf: explicitly disabled via build config 00:02:08.460 test-gpudev: explicitly disabled via build config 00:02:08.460 test-mldev: explicitly disabled via build config 00:02:08.460 test-pipeline: explicitly disabled via build config 00:02:08.460 test-pmd: 
explicitly disabled via build config 00:02:08.460 test-regex: explicitly disabled via build config 00:02:08.460 test-sad: explicitly disabled via build config 00:02:08.460 test-security-perf: explicitly disabled via build config 00:02:08.460 00:02:08.460 libs: 00:02:08.460 argparse: explicitly disabled via build config 00:02:08.460 metrics: explicitly disabled via build config 00:02:08.460 acl: explicitly disabled via build config 00:02:08.460 bbdev: explicitly disabled via build config 00:02:08.460 bitratestats: explicitly disabled via build config 00:02:08.460 bpf: explicitly disabled via build config 00:02:08.460 cfgfile: explicitly disabled via build config 00:02:08.460 distributor: explicitly disabled via build config 00:02:08.460 efd: explicitly disabled via build config 00:02:08.460 eventdev: explicitly disabled via build config 00:02:08.460 dispatcher: explicitly disabled via build config 00:02:08.460 gpudev: explicitly disabled via build config 00:02:08.460 gro: explicitly disabled via build config 00:02:08.460 gso: explicitly disabled via build config 00:02:08.460 ip_frag: explicitly disabled via build config 00:02:08.460 jobstats: explicitly disabled via build config 00:02:08.460 latencystats: explicitly disabled via build config 00:02:08.460 lpm: explicitly disabled via build config 00:02:08.460 member: explicitly disabled via build config 00:02:08.460 pcapng: explicitly disabled via build config 00:02:08.460 rawdev: explicitly disabled via build config 00:02:08.460 regexdev: explicitly disabled via build config 00:02:08.460 mldev: explicitly disabled via build config 00:02:08.460 rib: explicitly disabled via build config 00:02:08.460 sched: explicitly disabled via build config 00:02:08.460 stack: explicitly disabled via build config 00:02:08.460 ipsec: explicitly disabled via build config 00:02:08.460 pdcp: explicitly disabled via build config 00:02:08.460 fib: explicitly disabled via build config 00:02:08.460 port: explicitly disabled via build config 00:02:08.460 pdump: explicitly disabled via build config 00:02:08.460 table: explicitly disabled via build config 00:02:08.460 pipeline: explicitly disabled via build config 00:02:08.460 graph: explicitly disabled via build config 00:02:08.460 node: explicitly disabled via build config 00:02:08.460 00:02:08.460 drivers: 00:02:08.460 common/cpt: not in enabled drivers build config 00:02:08.460 common/dpaax: not in enabled drivers build config 00:02:08.460 common/iavf: not in enabled drivers build config 00:02:08.460 common/idpf: not in enabled drivers build config 00:02:08.460 common/ionic: not in enabled drivers build config 00:02:08.460 common/mvep: not in enabled drivers build config 00:02:08.460 common/octeontx: not in enabled drivers build config 00:02:08.460 bus/auxiliary: not in enabled drivers build config 00:02:08.460 bus/cdx: not in enabled drivers build config 00:02:08.460 bus/dpaa: not in enabled drivers build config 00:02:08.460 bus/fslmc: not in enabled drivers build config 00:02:08.460 bus/ifpga: not in enabled drivers build config 00:02:08.460 bus/platform: not in enabled drivers build config 00:02:08.460 bus/uacce: not in enabled drivers build config 00:02:08.460 bus/vmbus: not in enabled drivers build config 00:02:08.460 common/cnxk: not in enabled drivers build config 00:02:08.460 common/mlx5: not in enabled drivers build config 00:02:08.460 common/nfp: not in enabled drivers build config 00:02:08.460 common/nitrox: not in enabled drivers build config 00:02:08.460 common/qat: not in enabled drivers build config 
00:02:08.460 common/sfc_efx: not in enabled drivers build config 00:02:08.460 mempool/bucket: not in enabled drivers build config 00:02:08.460 mempool/cnxk: not in enabled drivers build config 00:02:08.460 mempool/dpaa: not in enabled drivers build config 00:02:08.460 mempool/dpaa2: not in enabled drivers build config 00:02:08.460 mempool/octeontx: not in enabled drivers build config 00:02:08.460 mempool/stack: not in enabled drivers build config 00:02:08.460 dma/cnxk: not in enabled drivers build config 00:02:08.460 dma/dpaa: not in enabled drivers build config 00:02:08.460 dma/dpaa2: not in enabled drivers build config 00:02:08.460 dma/hisilicon: not in enabled drivers build config 00:02:08.460 dma/idxd: not in enabled drivers build config 00:02:08.460 dma/ioat: not in enabled drivers build config 00:02:08.460 dma/skeleton: not in enabled drivers build config 00:02:08.460 net/af_packet: not in enabled drivers build config 00:02:08.460 net/af_xdp: not in enabled drivers build config 00:02:08.460 net/ark: not in enabled drivers build config 00:02:08.460 net/atlantic: not in enabled drivers build config 00:02:08.460 net/avp: not in enabled drivers build config 00:02:08.460 net/axgbe: not in enabled drivers build config 00:02:08.460 net/bnx2x: not in enabled drivers build config 00:02:08.460 net/bnxt: not in enabled drivers build config 00:02:08.460 net/bonding: not in enabled drivers build config 00:02:08.460 net/cnxk: not in enabled drivers build config 00:02:08.460 net/cpfl: not in enabled drivers build config 00:02:08.460 net/cxgbe: not in enabled drivers build config 00:02:08.460 net/dpaa: not in enabled drivers build config 00:02:08.460 net/dpaa2: not in enabled drivers build config 00:02:08.460 net/e1000: not in enabled drivers build config 00:02:08.460 net/ena: not in enabled drivers build config 00:02:08.460 net/enetc: not in enabled drivers build config 00:02:08.460 net/enetfec: not in enabled drivers build config 00:02:08.460 net/enic: not in enabled drivers build config 00:02:08.460 net/failsafe: not in enabled drivers build config 00:02:08.460 net/fm10k: not in enabled drivers build config 00:02:08.460 net/gve: not in enabled drivers build config 00:02:08.460 net/hinic: not in enabled drivers build config 00:02:08.460 net/hns3: not in enabled drivers build config 00:02:08.460 net/i40e: not in enabled drivers build config 00:02:08.460 net/iavf: not in enabled drivers build config 00:02:08.460 net/ice: not in enabled drivers build config 00:02:08.460 net/idpf: not in enabled drivers build config 00:02:08.460 net/igc: not in enabled drivers build config 00:02:08.460 net/ionic: not in enabled drivers build config 00:02:08.460 net/ipn3ke: not in enabled drivers build config 00:02:08.460 net/ixgbe: not in enabled drivers build config 00:02:08.460 net/mana: not in enabled drivers build config 00:02:08.460 net/memif: not in enabled drivers build config 00:02:08.460 net/mlx4: not in enabled drivers build config 00:02:08.460 net/mlx5: not in enabled drivers build config 00:02:08.460 net/mvneta: not in enabled drivers build config 00:02:08.461 net/mvpp2: not in enabled drivers build config 00:02:08.461 net/netvsc: not in enabled drivers build config 00:02:08.461 net/nfb: not in enabled drivers build config 00:02:08.461 net/nfp: not in enabled drivers build config 00:02:08.461 net/ngbe: not in enabled drivers build config 00:02:08.461 net/null: not in enabled drivers build config 00:02:08.461 net/octeontx: not in enabled drivers build config 00:02:08.461 net/octeon_ep: not in enabled 
drivers build config 00:02:08.461 net/pcap: not in enabled drivers build config 00:02:08.461 net/pfe: not in enabled drivers build config 00:02:08.461 net/qede: not in enabled drivers build config 00:02:08.461 net/ring: not in enabled drivers build config 00:02:08.461 net/sfc: not in enabled drivers build config 00:02:08.461 net/softnic: not in enabled drivers build config 00:02:08.461 net/tap: not in enabled drivers build config 00:02:08.461 net/thunderx: not in enabled drivers build config 00:02:08.461 net/txgbe: not in enabled drivers build config 00:02:08.461 net/vdev_netvsc: not in enabled drivers build config 00:02:08.461 net/vhost: not in enabled drivers build config 00:02:08.461 net/virtio: not in enabled drivers build config 00:02:08.461 net/vmxnet3: not in enabled drivers build config 00:02:08.461 raw/*: missing internal dependency, "rawdev" 00:02:08.461 crypto/armv8: not in enabled drivers build config 00:02:08.461 crypto/bcmfs: not in enabled drivers build config 00:02:08.461 crypto/caam_jr: not in enabled drivers build config 00:02:08.461 crypto/ccp: not in enabled drivers build config 00:02:08.461 crypto/cnxk: not in enabled drivers build config 00:02:08.461 crypto/dpaa_sec: not in enabled drivers build config 00:02:08.461 crypto/dpaa2_sec: not in enabled drivers build config 00:02:08.461 crypto/ipsec_mb: not in enabled drivers build config 00:02:08.461 crypto/mlx5: not in enabled drivers build config 00:02:08.461 crypto/mvsam: not in enabled drivers build config 00:02:08.461 crypto/nitrox: not in enabled drivers build config 00:02:08.461 crypto/null: not in enabled drivers build config 00:02:08.461 crypto/octeontx: not in enabled drivers build config 00:02:08.461 crypto/openssl: not in enabled drivers build config 00:02:08.461 crypto/scheduler: not in enabled drivers build config 00:02:08.461 crypto/uadk: not in enabled drivers build config 00:02:08.461 crypto/virtio: not in enabled drivers build config 00:02:08.461 compress/isal: not in enabled drivers build config 00:02:08.461 compress/mlx5: not in enabled drivers build config 00:02:08.461 compress/nitrox: not in enabled drivers build config 00:02:08.461 compress/octeontx: not in enabled drivers build config 00:02:08.461 compress/zlib: not in enabled drivers build config 00:02:08.461 regex/*: missing internal dependency, "regexdev" 00:02:08.461 ml/*: missing internal dependency, "mldev" 00:02:08.461 vdpa/ifc: not in enabled drivers build config 00:02:08.461 vdpa/mlx5: not in enabled drivers build config 00:02:08.461 vdpa/nfp: not in enabled drivers build config 00:02:08.461 vdpa/sfc: not in enabled drivers build config 00:02:08.461 event/*: missing internal dependency, "eventdev" 00:02:08.461 baseband/*: missing internal dependency, "bbdev" 00:02:08.461 gpu/*: missing internal dependency, "gpudev" 00:02:08.461 00:02:08.461 00:02:08.722 Build targets in project: 85 00:02:08.722 00:02:08.722 DPDK 24.03.0 00:02:08.722 00:02:08.722 User defined options 00:02:08.722 buildtype : debug 00:02:08.722 default_library : static 00:02:08.722 libdir : lib 00:02:08.722 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:08.722 c_args : -fPIC -Werror 00:02:08.722 c_link_args : 00:02:08.722 cpu_instruction_set: native 00:02:08.722 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:08.722 disable_libs : 
bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:08.722 enable_docs : false 00:02:08.722 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:08.722 enable_kmods : false 00:02:08.722 max_lcores : 128 00:02:08.722 tests : false 00:02:08.722 00:02:08.722 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:08.982 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:09.247 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.247 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:09.247 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:09.247 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:09.247 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:09.247 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:09.247 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:09.247 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:09.247 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:09.247 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:09.247 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:09.247 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.247 [13/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:09.247 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:09.247 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:09.247 [16/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:09.247 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:09.247 [18/268] Linking static target lib/librte_kvargs.a 00:02:09.247 [19/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:09.247 [20/268] Linking static target lib/librte_log.a 00:02:09.247 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:09.247 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:09.247 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:09.247 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:09.247 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:09.247 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:09.247 [27/268] Linking static target lib/librte_pci.a 00:02:09.247 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:09.247 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.247 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.247 [31/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:09.247 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:09.247 [33/268] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:02:09.510 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:09.510 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:09.771 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.771 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:09.771 [38/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.771 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.771 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:09.771 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:09.771 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:09.771 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:09.771 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:09.771 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:09.771 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:09.771 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:09.771 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:09.771 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:09.771 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:09.771 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:09.771 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:09.771 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:09.771 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:09.771 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:09.771 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:09.771 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:09.771 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:09.771 [59/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:09.771 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:09.771 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:09.771 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:09.771 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:09.771 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:09.771 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:09.771 [66/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:09.771 [67/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:09.771 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:09.771 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:09.771 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:09.771 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:09.771 [72/268] Linking static target lib/librte_telemetry.a 00:02:09.771 [73/268] Generating 
lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.771 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:09.771 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:09.771 [76/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:09.771 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:09.771 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:09.771 [79/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:09.771 [80/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:09.771 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:09.771 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:09.771 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:09.771 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:09.771 [85/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:09.771 [86/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:09.771 [87/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:09.771 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:09.771 [89/268] Linking static target lib/librte_meter.a 00:02:09.771 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:09.771 [91/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:09.771 [92/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:09.771 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:09.771 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:09.771 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:09.771 [96/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:09.771 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:09.771 [98/268] Linking static target lib/librte_ring.a 00:02:09.771 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:09.771 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:09.771 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:09.771 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:09.771 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:09.771 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:09.771 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:09.771 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:09.771 [107/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:09.771 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:09.771 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:09.771 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:09.771 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:09.771 [112/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:09.771 [113/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:09.771 [114/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:09.771 [115/268] Linking static target lib/librte_cmdline.a 00:02:09.771 [116/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:09.771 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:09.771 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:09.771 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:09.771 [120/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:09.771 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:09.771 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:09.771 [123/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:09.771 [124/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:09.771 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:09.771 [126/268] Linking static target lib/librte_net.a 00:02:09.771 [127/268] Linking static target lib/librte_mempool.a 00:02:09.771 [128/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:09.771 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:09.771 [130/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:09.771 [131/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:09.771 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:09.771 [133/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:10.032 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:10.032 [135/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.032 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:10.032 [137/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:10.032 [138/268] Linking static target lib/librte_dmadev.a 00:02:10.032 [139/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:10.032 [140/268] Linking static target lib/librte_rcu.a 00:02:10.032 [141/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:10.032 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:10.032 [143/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:10.032 [144/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:10.032 [145/268] Linking static target lib/librte_mbuf.a 00:02:10.032 [146/268] Linking static target lib/librte_compressdev.a 00:02:10.032 [147/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:10.032 [148/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:10.032 [149/268] Linking static target lib/librte_timer.a 00:02:10.032 [150/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:10.032 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:10.032 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:10.032 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:10.032 [154/268] Linking static target lib/librte_eal.a 00:02:10.032 [155/268] Compiling C object 
lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:10.032 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:10.032 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:10.032 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:10.032 [159/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.032 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:10.032 [161/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.032 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:10.032 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:10.032 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:10.032 [165/268] Linking static target lib/librte_hash.a 00:02:10.032 [166/268] Linking target lib/librte_log.so.24.1 00:02:10.032 [167/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.032 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:10.032 [169/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:10.032 [170/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:10.293 [171/268] Linking static target lib/librte_reorder.a 00:02:10.293 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:10.293 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:10.293 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:10.293 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:10.293 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:10.293 [177/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:10.293 [178/268] Linking static target lib/librte_power.a 00:02:10.293 [179/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:10.293 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:10.293 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:10.293 [182/268] Linking static target lib/librte_cryptodev.a 00:02:10.293 [183/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:10.293 [184/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:10.293 [185/268] Linking static target lib/librte_security.a 00:02:10.293 [186/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:10.293 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:10.293 [188/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.293 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:10.293 [190/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:10.293 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:10.293 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:10.293 [193/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:10.293 [194/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.293 [195/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:02:10.293 [196/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.293 [197/268] Linking target lib/librte_kvargs.so.24.1 00:02:10.293 [198/268] Linking target lib/librte_telemetry.so.24.1 00:02:10.553 [199/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:10.553 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:10.553 [201/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:10.553 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:10.553 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:10.553 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:10.553 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:10.554 [206/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:10.554 [207/268] Linking static target drivers/librte_mempool_ring.a 00:02:10.554 [208/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:10.554 [209/268] Linking static target drivers/librte_bus_vdev.a 00:02:10.554 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:10.554 [211/268] Linking static target lib/librte_ethdev.a 00:02:10.554 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:10.554 [213/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:10.554 [214/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.554 [215/268] Linking static target drivers/librte_bus_pci.a 00:02:10.554 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:10.554 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.554 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.814 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.814 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.814 [221/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.814 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.074 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.074 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.074 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:11.074 [226/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.074 [227/268] Linking static target lib/librte_vhost.a 00:02:11.335 [228/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.335 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.715 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.286 [231/268] 
Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.421 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.334 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.334 [234/268] Linking target lib/librte_eal.so.24.1 00:02:23.334 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:23.594 [236/268] Linking target lib/librte_meter.so.24.1 00:02:23.594 [237/268] Linking target lib/librte_ring.so.24.1 00:02:23.594 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:23.594 [239/268] Linking target lib/librte_pci.so.24.1 00:02:23.594 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:23.594 [241/268] Linking target lib/librte_timer.so.24.1 00:02:23.594 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:23.594 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:23.594 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:23.594 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:23.594 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:23.594 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:23.594 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:23.855 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:23.855 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:23.855 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:23.855 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:23.855 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:24.116 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:24.116 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:24.116 [256/268] Linking target lib/librte_net.so.24.1 00:02:24.116 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:24.116 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:24.116 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:24.376 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:24.376 [261/268] Linking target lib/librte_hash.so.24.1 00:02:24.376 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:24.376 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:24.376 [264/268] Linking target lib/librte_security.so.24.1 00:02:24.376 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:24.376 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:24.636 [267/268] Linking target lib/librte_power.so.24.1 00:02:24.636 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:24.636 INFO: autodetecting backend as ninja 00:02:24.636 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:25.576 CC lib/ut_mock/mock.o 00:02:25.576 CC lib/log/log.o 00:02:25.576 CC lib/log/log_flags.o 00:02:25.576 CC lib/log/log_deprecated.o 00:02:25.576 CC lib/ut/ut.o 00:02:25.576 LIB libspdk_ut_mock.a 00:02:25.576 LIB libspdk_ut.a 00:02:25.576 LIB libspdk_log.a 00:02:26.147 CC lib/ioat/ioat.o 
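The meson "User defined options" dump earlier in this log fully determines the DPDK configuration being built here. A rough hand-written equivalent of that setup step is sketched below; the actual command line is generated by SPDK's dpdkbuild scripts and is not shown in this log, so treat the invocation, and the DISABLE_APPS/DISABLE_LIBS placeholders standing in for the comma-separated lists printed verbatim in the dump above, as an illustrative reconstruction only.

# Illustrative reconstruction of the DPDK meson configuration reported above.
# Every option value is copied from the "User defined options" dump; the real
# command is produced by SPDK's build scripts and may differ.
cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
meson setup build-tmp \
  --buildtype=debug \
  --default-library=static \
  --libdir=lib \
  --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
  -Dc_args='-fPIC -Werror' \
  -Dcpu_instruction_set=native \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dmax_lcores=128 \
  -Dtests=false \
  -Ddisable_apps="$DISABLE_APPS" \
  -Ddisable_libs="$DISABLE_LIBS" \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
ninja -C build-tmp -j 112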
00:02:26.147 CC lib/util/base64.o 00:02:26.147 CC lib/util/bit_array.o 00:02:26.147 CC lib/util/crc16.o 00:02:26.147 CC lib/util/cpuset.o 00:02:26.147 CC lib/util/crc32.o 00:02:26.147 CC lib/util/crc32c.o 00:02:26.147 CC lib/util/crc32_ieee.o 00:02:26.147 CC lib/util/crc64.o 00:02:26.147 CC lib/util/dif.o 00:02:26.147 CC lib/util/fd.o 00:02:26.147 CC lib/dma/dma.o 00:02:26.147 CC lib/util/fd_group.o 00:02:26.147 CC lib/util/file.o 00:02:26.147 CC lib/util/hexlify.o 00:02:26.147 CC lib/util/iov.o 00:02:26.147 CC lib/util/math.o 00:02:26.147 CC lib/util/net.o 00:02:26.147 CC lib/util/pipe.o 00:02:26.147 CC lib/util/strerror_tls.o 00:02:26.147 CC lib/util/string.o 00:02:26.147 CXX lib/trace_parser/trace.o 00:02:26.147 CC lib/util/uuid.o 00:02:26.147 CC lib/util/xor.o 00:02:26.147 CC lib/util/zipf.o 00:02:26.147 CC lib/util/md5.o 00:02:26.147 CC lib/vfio_user/host/vfio_user.o 00:02:26.147 LIB libspdk_dma.a 00:02:26.147 CC lib/vfio_user/host/vfio_user_pci.o 00:02:26.147 LIB libspdk_ioat.a 00:02:26.407 LIB libspdk_vfio_user.a 00:02:26.407 LIB libspdk_util.a 00:02:26.408 LIB libspdk_trace_parser.a 00:02:26.667 CC lib/idxd/idxd.o 00:02:26.667 CC lib/idxd/idxd_user.o 00:02:26.667 CC lib/idxd/idxd_kernel.o 00:02:26.667 CC lib/rdma_utils/rdma_utils.o 00:02:26.667 CC lib/conf/conf.o 00:02:26.667 CC lib/json/json_parse.o 00:02:26.667 CC lib/json/json_util.o 00:02:26.667 CC lib/json/json_write.o 00:02:26.667 CC lib/vmd/vmd.o 00:02:26.667 CC lib/vmd/led.o 00:02:26.667 CC lib/env_dpdk/env.o 00:02:26.667 CC lib/env_dpdk/memory.o 00:02:26.667 CC lib/env_dpdk/pci.o 00:02:26.667 CC lib/env_dpdk/init.o 00:02:26.667 CC lib/env_dpdk/threads.o 00:02:26.667 CC lib/env_dpdk/pci_ioat.o 00:02:26.667 CC lib/env_dpdk/pci_virtio.o 00:02:26.667 CC lib/env_dpdk/pci_vmd.o 00:02:26.667 CC lib/env_dpdk/pci_idxd.o 00:02:26.667 CC lib/env_dpdk/pci_event.o 00:02:26.667 CC lib/env_dpdk/sigbus_handler.o 00:02:26.667 CC lib/env_dpdk/pci_dpdk.o 00:02:26.667 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:26.667 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:26.667 LIB libspdk_conf.a 00:02:26.927 LIB libspdk_rdma_utils.a 00:02:26.927 LIB libspdk_json.a 00:02:26.927 LIB libspdk_idxd.a 00:02:26.927 LIB libspdk_vmd.a 00:02:27.187 CC lib/rdma_provider/common.o 00:02:27.187 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:27.187 CC lib/jsonrpc/jsonrpc_server.o 00:02:27.187 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:27.187 CC lib/jsonrpc/jsonrpc_client.o 00:02:27.187 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:27.187 LIB libspdk_rdma_provider.a 00:02:27.447 LIB libspdk_jsonrpc.a 00:02:27.708 LIB libspdk_env_dpdk.a 00:02:27.708 CC lib/rpc/rpc.o 00:02:27.708 LIB libspdk_rpc.a 00:02:27.968 CC lib/keyring/keyring.o 00:02:27.968 CC lib/keyring/keyring_rpc.o 00:02:27.968 CC lib/trace/trace.o 00:02:27.968 CC lib/trace/trace_flags.o 00:02:27.968 CC lib/trace/trace_rpc.o 00:02:27.968 CC lib/notify/notify.o 00:02:27.968 CC lib/notify/notify_rpc.o 00:02:28.229 LIB libspdk_notify.a 00:02:28.229 LIB libspdk_keyring.a 00:02:28.229 LIB libspdk_trace.a 00:02:28.490 CC lib/thread/thread.o 00:02:28.490 CC lib/thread/iobuf.o 00:02:28.490 CC lib/sock/sock.o 00:02:28.490 CC lib/sock/sock_rpc.o 00:02:28.751 LIB libspdk_sock.a 00:02:29.323 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:29.323 CC lib/nvme/nvme_ctrlr.o 00:02:29.323 CC lib/nvme/nvme_fabric.o 00:02:29.323 CC lib/nvme/nvme_ns_cmd.o 00:02:29.323 CC lib/nvme/nvme_ns.o 00:02:29.323 CC lib/nvme/nvme_pcie_common.o 00:02:29.323 CC lib/nvme/nvme.o 00:02:29.323 CC lib/nvme/nvme_pcie.o 00:02:29.323 CC lib/nvme/nvme_qpair.o 00:02:29.323 CC 
lib/nvme/nvme_quirks.o 00:02:29.323 CC lib/nvme/nvme_transport.o 00:02:29.323 CC lib/nvme/nvme_discovery.o 00:02:29.323 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:29.323 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:29.323 CC lib/nvme/nvme_tcp.o 00:02:29.323 CC lib/nvme/nvme_opal.o 00:02:29.323 CC lib/nvme/nvme_io_msg.o 00:02:29.323 CC lib/nvme/nvme_poll_group.o 00:02:29.323 CC lib/nvme/nvme_zns.o 00:02:29.323 CC lib/nvme/nvme_stubs.o 00:02:29.323 CC lib/nvme/nvme_auth.o 00:02:29.323 CC lib/nvme/nvme_cuse.o 00:02:29.323 CC lib/nvme/nvme_vfio_user.o 00:02:29.323 CC lib/nvme/nvme_rdma.o 00:02:29.323 LIB libspdk_thread.a 00:02:29.583 CC lib/fsdev/fsdev.o 00:02:29.583 CC lib/fsdev/fsdev_io.o 00:02:29.583 CC lib/fsdev/fsdev_rpc.o 00:02:29.583 CC lib/virtio/virtio.o 00:02:29.583 CC lib/blob/blobstore.o 00:02:29.583 CC lib/virtio/virtio_vhost_user.o 00:02:29.583 CC lib/blob/request.o 00:02:29.583 CC lib/virtio/virtio_pci.o 00:02:29.583 CC lib/blob/zeroes.o 00:02:29.583 CC lib/virtio/virtio_vfio_user.o 00:02:29.583 CC lib/accel/accel.o 00:02:29.583 CC lib/blob/blob_bs_dev.o 00:02:29.583 CC lib/accel/accel_rpc.o 00:02:29.583 CC lib/init/json_config.o 00:02:29.583 CC lib/accel/accel_sw.o 00:02:29.583 CC lib/init/subsystem.o 00:02:29.583 CC lib/init/subsystem_rpc.o 00:02:29.583 CC lib/init/rpc.o 00:02:29.583 CC lib/vfu_tgt/tgt_endpoint.o 00:02:29.583 CC lib/vfu_tgt/tgt_rpc.o 00:02:29.843 LIB libspdk_init.a 00:02:29.843 LIB libspdk_virtio.a 00:02:29.843 LIB libspdk_vfu_tgt.a 00:02:30.103 LIB libspdk_fsdev.a 00:02:30.103 CC lib/event/app.o 00:02:30.103 CC lib/event/reactor.o 00:02:30.103 CC lib/event/log_rpc.o 00:02:30.103 CC lib/event/app_rpc.o 00:02:30.103 CC lib/event/scheduler_static.o 00:02:30.363 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:30.363 LIB libspdk_accel.a 00:02:30.363 LIB libspdk_event.a 00:02:30.363 LIB libspdk_nvme.a 00:02:30.623 CC lib/bdev/bdev.o 00:02:30.623 CC lib/bdev/bdev_rpc.o 00:02:30.623 CC lib/bdev/bdev_zone.o 00:02:30.623 CC lib/bdev/part.o 00:02:30.623 CC lib/bdev/scsi_nvme.o 00:02:30.623 LIB libspdk_fuse_dispatcher.a 00:02:31.194 LIB libspdk_blob.a 00:02:31.766 CC lib/lvol/lvol.o 00:02:31.766 CC lib/blobfs/blobfs.o 00:02:31.766 CC lib/blobfs/tree.o 00:02:32.027 LIB libspdk_lvol.a 00:02:32.027 LIB libspdk_blobfs.a 00:02:32.288 LIB libspdk_bdev.a 00:02:32.859 CC lib/scsi/dev.o 00:02:32.859 CC lib/scsi/lun.o 00:02:32.859 CC lib/scsi/port.o 00:02:32.859 CC lib/scsi/scsi.o 00:02:32.859 CC lib/ftl/ftl_core.o 00:02:32.859 CC lib/ftl/ftl_init.o 00:02:32.859 CC lib/scsi/scsi_bdev.o 00:02:32.859 CC lib/ftl/ftl_layout.o 00:02:32.859 CC lib/scsi/scsi_pr.o 00:02:32.859 CC lib/ftl/ftl_debug.o 00:02:32.859 CC lib/ftl/ftl_io.o 00:02:32.859 CC lib/scsi/scsi_rpc.o 00:02:32.859 CC lib/scsi/task.o 00:02:32.859 CC lib/nbd/nbd.o 00:02:32.859 CC lib/ftl/ftl_sb.o 00:02:32.859 CC lib/nbd/nbd_rpc.o 00:02:32.859 CC lib/ftl/ftl_l2p.o 00:02:32.859 CC lib/ftl/ftl_l2p_flat.o 00:02:32.859 CC lib/ftl/ftl_nv_cache.o 00:02:32.859 CC lib/ublk/ublk.o 00:02:32.859 CC lib/ftl/ftl_band.o 00:02:32.859 CC lib/ublk/ublk_rpc.o 00:02:32.859 CC lib/ftl/ftl_band_ops.o 00:02:32.859 CC lib/ftl/ftl_writer.o 00:02:32.859 CC lib/ftl/ftl_rq.o 00:02:32.859 CC lib/nvmf/ctrlr.o 00:02:32.859 CC lib/ftl/ftl_reloc.o 00:02:32.859 CC lib/nvmf/ctrlr_discovery.o 00:02:32.859 CC lib/ftl/ftl_l2p_cache.o 00:02:32.859 CC lib/nvmf/ctrlr_bdev.o 00:02:32.859 CC lib/nvmf/subsystem.o 00:02:32.859 CC lib/ftl/ftl_p2l.o 00:02:32.859 CC lib/nvmf/nvmf.o 00:02:32.859 CC lib/ftl/ftl_p2l_log.o 00:02:32.859 CC lib/nvmf/nvmf_rpc.o 00:02:32.859 CC 
lib/ftl/mngt/ftl_mngt.o 00:02:32.859 CC lib/nvmf/transport.o 00:02:32.859 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:32.859 CC lib/nvmf/tcp.o 00:02:32.859 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:32.859 CC lib/nvmf/stubs.o 00:02:32.859 CC lib/nvmf/mdns_server.o 00:02:32.859 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:32.859 CC lib/nvmf/vfio_user.o 00:02:32.859 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:32.859 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:32.859 CC lib/nvmf/rdma.o 00:02:32.859 CC lib/nvmf/auth.o 00:02:32.859 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:32.859 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:32.859 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:32.859 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:32.859 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:32.859 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:32.859 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:32.859 CC lib/ftl/utils/ftl_conf.o 00:02:32.859 CC lib/ftl/utils/ftl_md.o 00:02:32.859 CC lib/ftl/utils/ftl_mempool.o 00:02:32.859 CC lib/ftl/utils/ftl_bitmap.o 00:02:32.859 CC lib/ftl/utils/ftl_property.o 00:02:32.859 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:32.859 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:32.859 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:32.859 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:32.859 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:32.859 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:32.859 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:32.859 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:32.859 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:32.859 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:32.859 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:32.860 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:32.860 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:32.860 CC lib/ftl/base/ftl_base_dev.o 00:02:32.860 CC lib/ftl/base/ftl_base_bdev.o 00:02:32.860 CC lib/ftl/ftl_trace.o 00:02:33.119 LIB libspdk_nbd.a 00:02:33.119 LIB libspdk_scsi.a 00:02:33.119 LIB libspdk_ublk.a 00:02:33.380 CC lib/iscsi/conn.o 00:02:33.380 CC lib/iscsi/init_grp.o 00:02:33.380 CC lib/iscsi/iscsi.o 00:02:33.380 CC lib/iscsi/param.o 00:02:33.380 CC lib/iscsi/portal_grp.o 00:02:33.380 CC lib/vhost/vhost.o 00:02:33.380 CC lib/iscsi/tgt_node.o 00:02:33.380 CC lib/vhost/vhost_rpc.o 00:02:33.380 CC lib/iscsi/iscsi_subsystem.o 00:02:33.380 CC lib/vhost/vhost_scsi.o 00:02:33.380 CC lib/iscsi/iscsi_rpc.o 00:02:33.380 CC lib/vhost/vhost_blk.o 00:02:33.380 CC lib/iscsi/task.o 00:02:33.380 CC lib/vhost/rte_vhost_user.o 00:02:33.380 LIB libspdk_ftl.a 00:02:33.951 LIB libspdk_nvmf.a 00:02:33.951 LIB libspdk_vhost.a 00:02:34.212 LIB libspdk_iscsi.a 00:02:34.472 CC module/env_dpdk/env_dpdk_rpc.o 00:02:34.472 CC module/vfu_device/vfu_virtio.o 00:02:34.472 CC module/vfu_device/vfu_virtio_blk.o 00:02:34.472 CC module/vfu_device/vfu_virtio_scsi.o 00:02:34.472 CC module/vfu_device/vfu_virtio_rpc.o 00:02:34.472 CC module/vfu_device/vfu_virtio_fs.o 00:02:34.731 LIB libspdk_env_dpdk_rpc.a 00:02:34.731 CC module/accel/ioat/accel_ioat.o 00:02:34.731 CC module/accel/ioat/accel_ioat_rpc.o 00:02:34.731 CC module/accel/dsa/accel_dsa.o 00:02:34.731 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:34.731 CC module/accel/dsa/accel_dsa_rpc.o 00:02:34.731 CC module/accel/iaa/accel_iaa.o 00:02:34.731 CC module/sock/posix/posix.o 00:02:34.731 CC module/accel/iaa/accel_iaa_rpc.o 00:02:34.731 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:34.731 CC module/keyring/file/keyring.o 00:02:34.731 CC module/keyring/file/keyring_rpc.o 00:02:34.731 CC module/keyring/linux/keyring.o 00:02:34.731 CC module/blob/bdev/blob_bdev.o 00:02:34.731 CC 
module/accel/error/accel_error.o 00:02:34.731 CC module/accel/error/accel_error_rpc.o 00:02:34.731 CC module/scheduler/gscheduler/gscheduler.o 00:02:34.731 CC module/keyring/linux/keyring_rpc.o 00:02:34.731 CC module/fsdev/aio/fsdev_aio.o 00:02:34.731 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:34.731 CC module/fsdev/aio/linux_aio_mgr.o 00:02:34.731 LIB libspdk_keyring_file.a 00:02:34.731 LIB libspdk_keyring_linux.a 00:02:34.731 LIB libspdk_scheduler_dpdk_governor.a 00:02:34.731 LIB libspdk_scheduler_gscheduler.a 00:02:34.731 LIB libspdk_accel_ioat.a 00:02:34.731 LIB libspdk_scheduler_dynamic.a 00:02:34.731 LIB libspdk_accel_iaa.a 00:02:34.731 LIB libspdk_accel_error.a 00:02:34.991 LIB libspdk_blob_bdev.a 00:02:34.991 LIB libspdk_accel_dsa.a 00:02:34.991 LIB libspdk_vfu_device.a 00:02:35.249 LIB libspdk_sock_posix.a 00:02:35.249 LIB libspdk_fsdev_aio.a 00:02:35.249 CC module/bdev/lvol/vbdev_lvol.o 00:02:35.249 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:35.249 CC module/bdev/gpt/gpt.o 00:02:35.249 CC module/bdev/gpt/vbdev_gpt.o 00:02:35.249 CC module/bdev/malloc/bdev_malloc.o 00:02:35.249 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:35.249 CC module/bdev/iscsi/bdev_iscsi.o 00:02:35.249 CC module/bdev/raid/bdev_raid.o 00:02:35.249 CC module/bdev/raid/bdev_raid_rpc.o 00:02:35.249 CC module/bdev/raid/bdev_raid_sb.o 00:02:35.249 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:35.249 CC module/bdev/raid/raid0.o 00:02:35.249 CC module/bdev/raid/concat.o 00:02:35.249 CC module/bdev/raid/raid1.o 00:02:35.249 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:35.249 CC module/bdev/split/vbdev_split_rpc.o 00:02:35.249 CC module/bdev/split/vbdev_split.o 00:02:35.249 CC module/bdev/nvme/bdev_nvme.o 00:02:35.249 CC module/bdev/nvme/nvme_rpc.o 00:02:35.249 CC module/bdev/error/vbdev_error_rpc.o 00:02:35.249 CC module/bdev/nvme/bdev_mdns_client.o 00:02:35.249 CC module/bdev/error/vbdev_error.o 00:02:35.249 CC module/bdev/delay/vbdev_delay.o 00:02:35.249 CC module/bdev/nvme/vbdev_opal.o 00:02:35.249 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:35.249 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:35.249 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:35.249 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:35.249 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:35.249 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:35.249 CC module/bdev/passthru/vbdev_passthru.o 00:02:35.249 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:35.249 CC module/bdev/ftl/bdev_ftl.o 00:02:35.249 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:35.249 CC module/blobfs/bdev/blobfs_bdev.o 00:02:35.249 CC module/bdev/null/bdev_null.o 00:02:35.249 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:35.249 CC module/bdev/null/bdev_null_rpc.o 00:02:35.249 CC module/bdev/aio/bdev_aio_rpc.o 00:02:35.249 CC module/bdev/aio/bdev_aio.o 00:02:35.249 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:35.249 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:35.509 LIB libspdk_blobfs_bdev.a 00:02:35.509 LIB libspdk_bdev_split.a 00:02:35.509 LIB libspdk_bdev_gpt.a 00:02:35.509 LIB libspdk_bdev_error.a 00:02:35.509 LIB libspdk_bdev_null.a 00:02:35.509 LIB libspdk_bdev_ftl.a 00:02:35.509 LIB libspdk_bdev_passthru.a 00:02:35.509 LIB libspdk_bdev_iscsi.a 00:02:35.509 LIB libspdk_bdev_aio.a 00:02:35.509 LIB libspdk_bdev_malloc.a 00:02:35.509 LIB libspdk_bdev_zone_block.a 00:02:35.509 LIB libspdk_bdev_delay.a 00:02:35.769 LIB libspdk_bdev_lvol.a 00:02:35.769 LIB libspdk_bdev_virtio.a 00:02:36.029 LIB libspdk_bdev_raid.a 00:02:36.971 LIB libspdk_bdev_nvme.a 00:02:37.544 CC 
module/event/subsystems/sock/sock.o 00:02:37.544 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:37.544 CC module/event/subsystems/keyring/keyring.o 00:02:37.544 CC module/event/subsystems/vmd/vmd.o 00:02:37.544 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:37.544 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:37.544 CC module/event/subsystems/iobuf/iobuf.o 00:02:37.544 CC module/event/subsystems/fsdev/fsdev.o 00:02:37.544 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:37.544 CC module/event/subsystems/scheduler/scheduler.o 00:02:37.544 LIB libspdk_event_keyring.a 00:02:37.544 LIB libspdk_event_vhost_blk.a 00:02:37.544 LIB libspdk_event_vfu_tgt.a 00:02:37.544 LIB libspdk_event_vmd.a 00:02:37.544 LIB libspdk_event_fsdev.a 00:02:37.544 LIB libspdk_event_sock.a 00:02:37.544 LIB libspdk_event_scheduler.a 00:02:37.544 LIB libspdk_event_iobuf.a 00:02:37.805 CC module/event/subsystems/accel/accel.o 00:02:38.065 LIB libspdk_event_accel.a 00:02:38.326 CC module/event/subsystems/bdev/bdev.o 00:02:38.326 LIB libspdk_event_bdev.a 00:02:38.586 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:38.586 CC module/event/subsystems/nbd/nbd.o 00:02:38.586 CC module/event/subsystems/scsi/scsi.o 00:02:38.586 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:38.586 CC module/event/subsystems/ublk/ublk.o 00:02:38.846 LIB libspdk_event_nbd.a 00:02:38.846 LIB libspdk_event_ublk.a 00:02:38.846 LIB libspdk_event_scsi.a 00:02:38.846 LIB libspdk_event_nvmf.a 00:02:39.105 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:39.105 CC module/event/subsystems/iscsi/iscsi.o 00:02:39.364 LIB libspdk_event_vhost_scsi.a 00:02:39.364 LIB libspdk_event_iscsi.a 00:02:39.625 CC test/rpc_client/rpc_client_test.o 00:02:39.625 CC app/trace_record/trace_record.o 00:02:39.625 CXX app/trace/trace.o 00:02:39.625 TEST_HEADER include/spdk/accel_module.h 00:02:39.625 TEST_HEADER include/spdk/accel.h 00:02:39.625 TEST_HEADER include/spdk/assert.h 00:02:39.625 CC app/spdk_nvme_perf/perf.o 00:02:39.625 TEST_HEADER include/spdk/base64.h 00:02:39.625 TEST_HEADER include/spdk/barrier.h 00:02:39.625 TEST_HEADER include/spdk/bdev_module.h 00:02:39.625 CC app/spdk_lspci/spdk_lspci.o 00:02:39.625 TEST_HEADER include/spdk/bdev_zone.h 00:02:39.625 TEST_HEADER include/spdk/bdev.h 00:02:39.625 TEST_HEADER include/spdk/bit_array.h 00:02:39.625 CC app/spdk_nvme_discover/discovery_aer.o 00:02:39.625 TEST_HEADER include/spdk/bit_pool.h 00:02:39.625 TEST_HEADER include/spdk/blobfs.h 00:02:39.625 TEST_HEADER include/spdk/blob_bdev.h 00:02:39.625 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:39.625 CC app/spdk_top/spdk_top.o 00:02:39.625 CC app/spdk_nvme_identify/identify.o 00:02:39.625 TEST_HEADER include/spdk/blob.h 00:02:39.625 TEST_HEADER include/spdk/crc32.h 00:02:39.625 TEST_HEADER include/spdk/conf.h 00:02:39.625 TEST_HEADER include/spdk/crc64.h 00:02:39.625 TEST_HEADER include/spdk/config.h 00:02:39.625 TEST_HEADER include/spdk/cpuset.h 00:02:39.625 TEST_HEADER include/spdk/crc16.h 00:02:39.625 TEST_HEADER include/spdk/dif.h 00:02:39.625 TEST_HEADER include/spdk/endian.h 00:02:39.625 TEST_HEADER include/spdk/dma.h 00:02:39.625 TEST_HEADER include/spdk/env.h 00:02:39.625 TEST_HEADER include/spdk/env_dpdk.h 00:02:39.625 TEST_HEADER include/spdk/fd_group.h 00:02:39.625 TEST_HEADER include/spdk/event.h 00:02:39.625 TEST_HEADER include/spdk/file.h 00:02:39.625 TEST_HEADER include/spdk/fsdev.h 00:02:39.625 TEST_HEADER include/spdk/fd.h 00:02:39.625 TEST_HEADER include/spdk/fsdev_module.h 00:02:39.625 TEST_HEADER include/spdk/ftl.h 
00:02:39.625 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:39.625 TEST_HEADER include/spdk/hexlify.h 00:02:39.625 TEST_HEADER include/spdk/histogram_data.h 00:02:39.625 TEST_HEADER include/spdk/gpt_spec.h 00:02:39.625 TEST_HEADER include/spdk/idxd.h 00:02:39.625 TEST_HEADER include/spdk/idxd_spec.h 00:02:39.625 TEST_HEADER include/spdk/init.h 00:02:39.625 CC app/spdk_dd/spdk_dd.o 00:02:39.625 TEST_HEADER include/spdk/ioat.h 00:02:39.625 TEST_HEADER include/spdk/ioat_spec.h 00:02:39.625 TEST_HEADER include/spdk/jsonrpc.h 00:02:39.625 TEST_HEADER include/spdk/iscsi_spec.h 00:02:39.625 TEST_HEADER include/spdk/json.h 00:02:39.625 TEST_HEADER include/spdk/keyring.h 00:02:39.626 TEST_HEADER include/spdk/keyring_module.h 00:02:39.626 TEST_HEADER include/spdk/likely.h 00:02:39.626 TEST_HEADER include/spdk/log.h 00:02:39.626 TEST_HEADER include/spdk/lvol.h 00:02:39.626 TEST_HEADER include/spdk/mmio.h 00:02:39.626 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:39.626 TEST_HEADER include/spdk/md5.h 00:02:39.626 TEST_HEADER include/spdk/memory.h 00:02:39.626 TEST_HEADER include/spdk/net.h 00:02:39.626 TEST_HEADER include/spdk/notify.h 00:02:39.626 TEST_HEADER include/spdk/nvme.h 00:02:39.626 TEST_HEADER include/spdk/nvme_intel.h 00:02:39.626 TEST_HEADER include/spdk/nbd.h 00:02:39.626 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:39.626 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:39.626 TEST_HEADER include/spdk/nvme_spec.h 00:02:39.626 TEST_HEADER include/spdk/nvme_zns.h 00:02:39.626 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:39.626 CC app/iscsi_tgt/iscsi_tgt.o 00:02:39.626 TEST_HEADER include/spdk/nvmf.h 00:02:39.626 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:39.626 TEST_HEADER include/spdk/opal.h 00:02:39.626 TEST_HEADER include/spdk/nvmf_spec.h 00:02:39.626 TEST_HEADER include/spdk/nvmf_transport.h 00:02:39.626 TEST_HEADER include/spdk/pci_ids.h 00:02:39.626 TEST_HEADER include/spdk/opal_spec.h 00:02:39.626 TEST_HEADER include/spdk/pipe.h 00:02:39.626 CC app/nvmf_tgt/nvmf_main.o 00:02:39.626 TEST_HEADER include/spdk/reduce.h 00:02:39.626 TEST_HEADER include/spdk/queue.h 00:02:39.626 TEST_HEADER include/spdk/scheduler.h 00:02:39.626 TEST_HEADER include/spdk/rpc.h 00:02:39.626 TEST_HEADER include/spdk/scsi.h 00:02:39.626 TEST_HEADER include/spdk/scsi_spec.h 00:02:39.626 TEST_HEADER include/spdk/sock.h 00:02:39.626 TEST_HEADER include/spdk/thread.h 00:02:39.626 TEST_HEADER include/spdk/stdinc.h 00:02:39.626 TEST_HEADER include/spdk/string.h 00:02:39.626 TEST_HEADER include/spdk/trace_parser.h 00:02:39.626 TEST_HEADER include/spdk/trace.h 00:02:39.626 TEST_HEADER include/spdk/util.h 00:02:39.626 TEST_HEADER include/spdk/tree.h 00:02:39.626 TEST_HEADER include/spdk/ublk.h 00:02:39.626 TEST_HEADER include/spdk/version.h 00:02:39.626 TEST_HEADER include/spdk/uuid.h 00:02:39.626 TEST_HEADER include/spdk/vhost.h 00:02:39.626 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:39.626 TEST_HEADER include/spdk/vmd.h 00:02:39.626 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:39.626 TEST_HEADER include/spdk/xor.h 00:02:39.626 TEST_HEADER include/spdk/zipf.h 00:02:39.626 CXX test/cpp_headers/accel_module.o 00:02:39.626 CXX test/cpp_headers/accel.o 00:02:39.626 CC app/spdk_tgt/spdk_tgt.o 00:02:39.626 CXX test/cpp_headers/base64.o 00:02:39.626 CXX test/cpp_headers/assert.o 00:02:39.626 CXX test/cpp_headers/bdev.o 00:02:39.626 CXX test/cpp_headers/barrier.o 00:02:39.626 CXX test/cpp_headers/bdev_module.o 00:02:39.626 CXX test/cpp_headers/bit_array.o 00:02:39.626 CXX test/cpp_headers/bdev_zone.o 00:02:39.626 
CXX test/cpp_headers/bit_pool.o 00:02:39.626 CXX test/cpp_headers/blob.o 00:02:39.626 CXX test/cpp_headers/blobfs.o 00:02:39.626 CXX test/cpp_headers/blob_bdev.o 00:02:39.626 CXX test/cpp_headers/conf.o 00:02:39.626 CC examples/util/zipf/zipf.o 00:02:39.626 CXX test/cpp_headers/blobfs_bdev.o 00:02:39.626 CXX test/cpp_headers/cpuset.o 00:02:39.626 CXX test/cpp_headers/crc16.o 00:02:39.626 CXX test/cpp_headers/config.o 00:02:39.626 CXX test/cpp_headers/crc32.o 00:02:39.626 CXX test/cpp_headers/crc64.o 00:02:39.626 CXX test/cpp_headers/dma.o 00:02:39.626 CXX test/cpp_headers/env_dpdk.o 00:02:39.626 CXX test/cpp_headers/dif.o 00:02:39.626 CXX test/cpp_headers/event.o 00:02:39.626 CXX test/cpp_headers/env.o 00:02:39.626 CXX test/cpp_headers/endian.o 00:02:39.626 CXX test/cpp_headers/fd_group.o 00:02:39.626 CXX test/cpp_headers/fd.o 00:02:39.626 CXX test/cpp_headers/file.o 00:02:39.626 CXX test/cpp_headers/fsdev.o 00:02:39.626 CXX test/cpp_headers/fsdev_module.o 00:02:39.626 CXX test/cpp_headers/ftl.o 00:02:39.626 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:39.626 CC test/app/stub/stub.o 00:02:39.626 CXX test/cpp_headers/hexlify.o 00:02:39.626 CXX test/cpp_headers/fuse_dispatcher.o 00:02:39.626 CC test/thread/lock/spdk_lock.o 00:02:39.626 CXX test/cpp_headers/histogram_data.o 00:02:39.626 CXX test/cpp_headers/gpt_spec.o 00:02:39.626 CC test/env/pci/pci_ut.o 00:02:39.626 CXX test/cpp_headers/idxd.o 00:02:39.626 CXX test/cpp_headers/idxd_spec.o 00:02:39.626 CXX test/cpp_headers/ioat.o 00:02:39.626 CXX test/cpp_headers/init.o 00:02:39.626 CXX test/cpp_headers/iscsi_spec.o 00:02:39.626 CXX test/cpp_headers/ioat_spec.o 00:02:39.626 CXX test/cpp_headers/json.o 00:02:39.626 CC examples/ioat/verify/verify.o 00:02:39.626 CXX test/cpp_headers/jsonrpc.o 00:02:39.626 CXX test/cpp_headers/keyring.o 00:02:39.626 CC test/thread/poller_perf/poller_perf.o 00:02:39.626 CC test/app/histogram_perf/histogram_perf.o 00:02:39.626 CXX test/cpp_headers/likely.o 00:02:39.626 CXX test/cpp_headers/keyring_module.o 00:02:39.626 CXX test/cpp_headers/log.o 00:02:39.626 CXX test/cpp_headers/md5.o 00:02:39.626 CXX test/cpp_headers/lvol.o 00:02:39.626 CXX test/cpp_headers/memory.o 00:02:39.626 CXX test/cpp_headers/mmio.o 00:02:39.626 CC test/app/jsoncat/jsoncat.o 00:02:39.626 CXX test/cpp_headers/nbd.o 00:02:39.626 CXX test/cpp_headers/net.o 00:02:39.626 CC test/env/memory/memory_ut.o 00:02:39.626 CXX test/cpp_headers/nvme.o 00:02:39.626 CXX test/cpp_headers/notify.o 00:02:39.626 CXX test/cpp_headers/nvme_intel.o 00:02:39.626 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:39.626 CXX test/cpp_headers/nvme_ocssd.o 00:02:39.626 CXX test/cpp_headers/nvme_spec.o 00:02:39.626 CXX test/cpp_headers/nvme_zns.o 00:02:39.626 CXX test/cpp_headers/nvmf_cmd.o 00:02:39.626 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:39.626 CC app/fio/nvme/fio_plugin.o 00:02:39.626 CXX test/cpp_headers/nvmf.o 00:02:39.626 CXX test/cpp_headers/nvmf_spec.o 00:02:39.626 CXX test/cpp_headers/nvmf_transport.o 00:02:39.626 CXX test/cpp_headers/opal.o 00:02:39.626 LINK spdk_lspci 00:02:39.626 CXX test/cpp_headers/opal_spec.o 00:02:39.626 CXX test/cpp_headers/pci_ids.o 00:02:39.626 CXX test/cpp_headers/queue.o 00:02:39.626 CXX test/cpp_headers/reduce.o 00:02:39.626 CXX test/cpp_headers/pipe.o 00:02:39.626 CXX test/cpp_headers/rpc.o 00:02:39.626 CXX test/cpp_headers/scheduler.o 00:02:39.626 CXX test/cpp_headers/scsi.o 00:02:39.626 CC examples/ioat/perf/perf.o 00:02:39.626 CXX test/cpp_headers/scsi_spec.o 00:02:39.626 CC test/env/vtophys/vtophys.o 00:02:39.887 
CXX test/cpp_headers/sock.o 00:02:39.887 CXX test/cpp_headers/stdinc.o 00:02:39.887 CC test/app/bdev_svc/bdev_svc.o 00:02:39.887 CC test/dma/test_dma/test_dma.o 00:02:39.887 LINK rpc_client_test 00:02:39.887 CXX test/cpp_headers/string.o 00:02:39.887 CXX test/cpp_headers/thread.o 00:02:39.887 CC app/fio/bdev/fio_plugin.o 00:02:39.887 LINK spdk_nvme_discover 00:02:39.887 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:39.887 CC test/env/mem_callbacks/mem_callbacks.o 00:02:39.887 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:39.887 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:39.887 LINK spdk_trace_record 00:02:39.887 LINK interrupt_tgt 00:02:39.887 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:39.887 LINK zipf 00:02:39.887 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:39.887 CXX test/cpp_headers/trace.o 00:02:39.887 CXX test/cpp_headers/trace_parser.o 00:02:39.887 LINK histogram_perf 00:02:39.887 CXX test/cpp_headers/tree.o 00:02:39.887 LINK nvmf_tgt 00:02:39.887 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:39.887 CXX test/cpp_headers/ublk.o 00:02:39.888 CXX test/cpp_headers/util.o 00:02:39.888 CXX test/cpp_headers/uuid.o 00:02:39.888 CXX test/cpp_headers/version.o 00:02:39.888 LINK jsoncat 00:02:39.888 CXX test/cpp_headers/vfio_user_pci.o 00:02:39.888 LINK env_dpdk_post_init 00:02:39.888 LINK poller_perf 00:02:39.888 LINK iscsi_tgt 00:02:39.888 CXX test/cpp_headers/vfio_user_spec.o 00:02:39.888 CXX test/cpp_headers/vhost.o 00:02:39.888 CXX test/cpp_headers/vmd.o 00:02:39.888 CXX test/cpp_headers/xor.o 00:02:39.888 CXX test/cpp_headers/zipf.o 00:02:39.888 LINK vtophys 00:02:39.888 LINK stub 00:02:39.888 LINK verify 00:02:39.888 LINK spdk_tgt 00:02:39.888 LINK bdev_svc 00:02:40.146 LINK ioat_perf 00:02:40.146 LINK spdk_trace 00:02:40.146 LINK pci_ut 00:02:40.146 LINK spdk_dd 00:02:40.146 LINK llvm_vfio_fuzz 00:02:40.146 LINK test_dma 00:02:40.146 LINK vhost_fuzz 00:02:40.146 LINK nvme_fuzz 00:02:40.146 LINK spdk_nvme_identify 00:02:40.405 LINK spdk_nvme_perf 00:02:40.405 LINK spdk_nvme 00:02:40.405 LINK mem_callbacks 00:02:40.405 LINK spdk_bdev 00:02:40.405 LINK spdk_top 00:02:40.405 LINK llvm_nvme_fuzz 00:02:40.405 CC examples/idxd/perf/perf.o 00:02:40.405 CC app/vhost/vhost.o 00:02:40.405 CC examples/sock/hello_world/hello_sock.o 00:02:40.405 CC examples/vmd/led/led.o 00:02:40.664 CC examples/vmd/lsvmd/lsvmd.o 00:02:40.664 CC examples/thread/thread/thread_ex.o 00:02:40.664 LINK led 00:02:40.664 LINK lsvmd 00:02:40.664 LINK memory_ut 00:02:40.664 LINK vhost 00:02:40.664 LINK hello_sock 00:02:40.664 LINK idxd_perf 00:02:40.664 LINK thread 00:02:40.924 LINK spdk_lock 00:02:40.924 LINK iscsi_fuzz 00:02:41.492 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:41.492 CC examples/nvme/reconnect/reconnect.o 00:02:41.492 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:41.492 CC examples/nvme/arbitration/arbitration.o 00:02:41.492 CC examples/nvme/hello_world/hello_world.o 00:02:41.492 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:41.492 CC examples/nvme/hotplug/hotplug.o 00:02:41.492 CC examples/nvme/abort/abort.o 00:02:41.492 CC test/event/event_perf/event_perf.o 00:02:41.492 CC test/event/reactor_perf/reactor_perf.o 00:02:41.492 CC test/event/reactor/reactor.o 00:02:41.492 CC test/event/app_repeat/app_repeat.o 00:02:41.492 LINK pmr_persistence 00:02:41.492 CC test/event/scheduler/scheduler.o 00:02:41.492 LINK cmb_copy 00:02:41.492 LINK hello_world 00:02:41.492 LINK hotplug 00:02:41.492 LINK reactor 00:02:41.750 LINK reactor_perf 00:02:41.750 LINK reconnect 
00:02:41.750 LINK event_perf 00:02:41.750 LINK arbitration 00:02:41.750 LINK abort 00:02:41.750 LINK app_repeat 00:02:41.750 LINK nvme_manage 00:02:41.750 LINK scheduler 00:02:41.750 CC test/nvme/sgl/sgl.o 00:02:41.750 CC test/nvme/fdp/fdp.o 00:02:41.750 CC test/nvme/e2edp/nvme_dp.o 00:02:41.750 CC test/nvme/reset/reset.o 00:02:41.750 CC test/nvme/startup/startup.o 00:02:41.750 CC test/nvme/aer/aer.o 00:02:41.750 CC test/nvme/simple_copy/simple_copy.o 00:02:41.750 CC test/nvme/boot_partition/boot_partition.o 00:02:41.750 CC test/nvme/connect_stress/connect_stress.o 00:02:41.750 CC test/nvme/err_injection/err_injection.o 00:02:41.750 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:41.750 CC test/nvme/compliance/nvme_compliance.o 00:02:41.750 CC test/nvme/reserve/reserve.o 00:02:41.750 CC test/nvme/cuse/cuse.o 00:02:41.750 CC test/nvme/overhead/overhead.o 00:02:41.750 CC test/nvme/fused_ordering/fused_ordering.o 00:02:41.750 CC test/accel/dif/dif.o 00:02:41.750 CC test/blobfs/mkfs/mkfs.o 00:02:42.009 CC test/lvol/esnap/esnap.o 00:02:42.009 LINK err_injection 00:02:42.010 LINK boot_partition 00:02:42.010 LINK doorbell_aers 00:02:42.010 LINK connect_stress 00:02:42.010 LINK startup 00:02:42.010 LINK fused_ordering 00:02:42.010 LINK reserve 00:02:42.010 LINK simple_copy 00:02:42.010 LINK sgl 00:02:42.010 LINK fdp 00:02:42.010 LINK nvme_dp 00:02:42.010 LINK reset 00:02:42.010 LINK aer 00:02:42.010 LINK overhead 00:02:42.010 LINK mkfs 00:02:42.010 LINK nvme_compliance 00:02:42.269 LINK dif 00:02:42.269 CC examples/accel/perf/accel_perf.o 00:02:42.529 CC examples/blob/hello_world/hello_blob.o 00:02:42.529 CC examples/blob/cli/blobcli.o 00:02:42.529 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:42.529 LINK hello_blob 00:02:42.529 LINK hello_fsdev 00:02:42.789 LINK accel_perf 00:02:42.789 LINK cuse 00:02:42.789 LINK blobcli 00:02:43.358 CC examples/bdev/hello_world/hello_bdev.o 00:02:43.358 CC examples/bdev/bdevperf/bdevperf.o 00:02:43.617 LINK hello_bdev 00:02:43.877 CC test/bdev/bdevio/bdevio.o 00:02:43.877 LINK bdevperf 00:02:44.137 LINK bdevio 00:02:45.517 LINK esnap 00:02:45.517 CC examples/nvmf/nvmf/nvmf.o 00:02:45.517 LINK nvmf 00:02:46.900 00:02:46.900 real 0m46.644s 00:02:46.900 user 6m19.342s 00:02:46.900 sys 2m29.522s 00:02:46.900 13:09:48 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:46.900 13:09:48 make -- common/autotest_common.sh@10 -- $ set +x 00:02:46.900 ************************************ 00:02:46.900 END TEST make 00:02:46.900 ************************************ 00:02:46.900 13:09:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:46.900 13:09:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:46.900 13:09:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:46.900 13:09:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.900 13:09:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:46.900 13:09:48 -- pm/common@44 -- $ pid=167499 00:02:46.900 13:09:48 -- pm/common@50 -- $ kill -TERM 167499 00:02:46.900 13:09:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.900 13:09:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:46.900 13:09:48 -- pm/common@44 -- $ pid=167501 00:02:46.900 13:09:48 -- pm/common@50 -- $ kill -TERM 167501 00:02:46.900 13:09:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.900 13:09:48 -- 
pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:46.900 13:09:48 -- pm/common@44 -- $ pid=167503 00:02:46.900 13:09:48 -- pm/common@50 -- $ kill -TERM 167503 00:02:46.900 13:09:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.900 13:09:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:46.900 13:09:48 -- pm/common@44 -- $ pid=167525 00:02:46.900 13:09:48 -- pm/common@50 -- $ sudo -E kill -TERM 167525 00:02:46.900 13:09:48 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:46.900 13:09:48 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:02:46.900 13:09:49 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:46.900 13:09:49 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:46.900 13:09:49 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:46.900 13:09:49 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:46.900 13:09:49 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:46.900 13:09:49 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:46.900 13:09:49 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:46.900 13:09:49 -- scripts/common.sh@336 -- # IFS=.-: 00:02:46.900 13:09:49 -- scripts/common.sh@336 -- # read -ra ver1 00:02:46.900 13:09:49 -- scripts/common.sh@337 -- # IFS=.-: 00:02:46.900 13:09:49 -- scripts/common.sh@337 -- # read -ra ver2 00:02:46.900 13:09:49 -- scripts/common.sh@338 -- # local 'op=<' 00:02:46.900 13:09:49 -- scripts/common.sh@340 -- # ver1_l=2 00:02:46.900 13:09:49 -- scripts/common.sh@341 -- # ver2_l=1 00:02:46.900 13:09:49 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:46.900 13:09:49 -- scripts/common.sh@344 -- # case "$op" in 00:02:46.900 13:09:49 -- scripts/common.sh@345 -- # : 1 00:02:46.900 13:09:49 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:46.900 13:09:49 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.900 13:09:49 -- scripts/common.sh@365 -- # decimal 1 00:02:46.900 13:09:49 -- scripts/common.sh@353 -- # local d=1 00:02:46.900 13:09:49 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:46.900 13:09:49 -- scripts/common.sh@355 -- # echo 1 00:02:46.900 13:09:49 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:46.900 13:09:49 -- scripts/common.sh@366 -- # decimal 2 00:02:46.900 13:09:49 -- scripts/common.sh@353 -- # local d=2 00:02:46.900 13:09:49 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:46.900 13:09:49 -- scripts/common.sh@355 -- # echo 2 00:02:46.900 13:09:49 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:46.900 13:09:49 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:46.900 13:09:49 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:46.900 13:09:49 -- scripts/common.sh@368 -- # return 0 00:02:46.900 13:09:49 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:46.900 13:09:49 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:46.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.900 --rc genhtml_branch_coverage=1 00:02:46.900 --rc genhtml_function_coverage=1 00:02:46.900 --rc genhtml_legend=1 00:02:46.900 --rc geninfo_all_blocks=1 00:02:46.900 --rc geninfo_unexecuted_blocks=1 00:02:46.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:46.900 ' 00:02:46.900 13:09:49 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:46.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.900 --rc genhtml_branch_coverage=1 00:02:46.900 --rc genhtml_function_coverage=1 00:02:46.900 --rc genhtml_legend=1 00:02:46.900 --rc geninfo_all_blocks=1 00:02:46.900 --rc geninfo_unexecuted_blocks=1 00:02:46.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:46.900 ' 00:02:46.900 13:09:49 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:46.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.900 --rc genhtml_branch_coverage=1 00:02:46.900 --rc genhtml_function_coverage=1 00:02:46.900 --rc genhtml_legend=1 00:02:46.900 --rc geninfo_all_blocks=1 00:02:46.900 --rc geninfo_unexecuted_blocks=1 00:02:46.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:46.900 ' 00:02:46.900 13:09:49 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:46.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.900 --rc genhtml_branch_coverage=1 00:02:46.900 --rc genhtml_function_coverage=1 00:02:46.900 --rc genhtml_legend=1 00:02:46.900 --rc geninfo_all_blocks=1 00:02:46.900 --rc geninfo_unexecuted_blocks=1 00:02:46.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:46.900 ' 00:02:46.900 13:09:49 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:46.900 13:09:49 -- nvmf/common.sh@7 -- # uname -s 00:02:46.900 13:09:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:46.900 13:09:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:46.900 13:09:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:46.900 13:09:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:46.900 13:09:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:46.900 13:09:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:46.900 13:09:49 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:46.900 13:09:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:46.900 13:09:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:47.161 13:09:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:47.161 13:09:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:47.161 13:09:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:47.161 13:09:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:47.161 13:09:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:47.161 13:09:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:47.161 13:09:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:47.161 13:09:49 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:47.161 13:09:49 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:47.161 13:09:49 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:47.161 13:09:49 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.161 13:09:49 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.161 13:09:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.161 13:09:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.161 13:09:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.161 13:09:49 -- paths/export.sh@5 -- # export PATH 00:02:47.161 13:09:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.161 13:09:49 -- nvmf/common.sh@51 -- # : 0 00:02:47.161 13:09:49 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:47.161 13:09:49 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:47.161 13:09:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:47.161 13:09:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:47.161 13:09:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:47.161 13:09:49 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:47.161 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:47.161 13:09:49 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:47.161 13:09:49 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:47.161 13:09:49 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:47.161 13:09:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:47.161 13:09:49 -- spdk/autotest.sh@32 -- # uname -s 00:02:47.161 
13:09:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:47.161 13:09:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:47.161 13:09:49 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:47.161 13:09:49 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:47.161 13:09:49 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:47.161 13:09:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:47.161 13:09:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:47.161 13:09:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:47.161 13:09:49 -- spdk/autotest.sh@48 -- # udevadm_pid=231386 00:02:47.161 13:09:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:47.161 13:09:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:47.161 13:09:49 -- pm/common@17 -- # local monitor 00:02:47.161 13:09:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.161 13:09:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.161 13:09:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.161 13:09:49 -- pm/common@21 -- # date +%s 00:02:47.161 13:09:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.161 13:09:49 -- pm/common@21 -- # date +%s 00:02:47.161 13:09:49 -- pm/common@25 -- # sleep 1 00:02:47.161 13:09:49 -- pm/common@21 -- # date +%s 00:02:47.161 13:09:49 -- pm/common@21 -- # date +%s 00:02:47.161 13:09:49 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733746189 00:02:47.161 13:09:49 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733746189 00:02:47.161 13:09:49 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733746189 00:02:47.161 13:09:49 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733746189 00:02:47.161 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733746189_collect-cpu-load.pm.log 00:02:47.161 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733746189_collect-vmstat.pm.log 00:02:47.161 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733746189_collect-cpu-temp.pm.log 00:02:47.161 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733746189_collect-bmc-pm.bmc.pm.log 00:02:48.101 13:09:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:48.102 13:09:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:48.102 13:09:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:48.102 13:09:50 -- common/autotest_common.sh@10 -- # set +x 
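The trace above captures autotest.sh saving the node's existing core handler (systemd-coredump) and pointing kernel core dumps at SPDK's core-collector.sh before starting the CPU/vmstat/temperature/BMC monitors. A minimal sketch of that mechanism, assuming the usual /proc/sys/kernel/core_pattern pipe-handler target (the redirection target itself is not visible in this wrapped excerpt, and $rootdir/$output_dir are illustrative stand-ins for the absolute Jenkins paths in the log):

# Minimal sketch, assuming the standard core_pattern target; paths are illustrative.
old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # e.g. |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
mkdir -p "$output_dir/coredumps"                       # where collected cores will land
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
# ... tests run; any crash now pipes PID, signal and timestamp to the collector ...
echo "$old_core_pattern" > /proc/sys/kernel/core_pattern   # restore the original handler during cleanup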
00:02:48.102 13:09:50 -- spdk/autotest.sh@59 -- # create_test_list 00:02:48.102 13:09:50 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:48.102 13:09:50 -- common/autotest_common.sh@10 -- # set +x 00:02:48.102 13:09:50 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:48.102 13:09:50 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:48.102 13:09:50 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:48.102 13:09:50 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:48.102 13:09:50 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:48.102 13:09:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:48.102 13:09:50 -- common/autotest_common.sh@1457 -- # uname 00:02:48.102 13:09:50 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:48.102 13:09:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:48.102 13:09:50 -- common/autotest_common.sh@1477 -- # uname 00:02:48.102 13:09:50 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:48.102 13:09:50 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:48.102 13:09:50 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:48.361 lcov: LCOV version 1.15 00:02:48.361 13:09:50 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:02:56.491 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:01.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:03:04.310 13:10:05 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:04.310 13:10:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:04.310 13:10:05 -- common/autotest_common.sh@10 -- # set +x 00:03:04.310 13:10:05 -- spdk/autotest.sh@78 -- # rm -f 00:03:04.310 13:10:05 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.606 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:07.606 
0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:07.606 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:07.606 13:10:09 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:07.606 13:10:09 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:07.606 13:10:09 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:07.606 13:10:09 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:07.606 13:10:09 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:07.606 13:10:09 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:07.606 13:10:09 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:07.606 13:10:09 -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:03:07.606 13:10:09 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:07.606 13:10:09 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:07.606 13:10:09 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:07.606 13:10:09 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:07.606 13:10:09 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:07.606 13:10:09 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:07.606 13:10:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:07.606 13:10:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:07.606 13:10:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:07.606 13:10:09 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:07.606 13:10:09 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:07.606 No valid GPT data, bailing 00:03:07.866 13:10:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:07.866 13:10:09 -- scripts/common.sh@394 -- # pt= 00:03:07.866 13:10:09 -- scripts/common.sh@395 -- # return 1 00:03:07.866 13:10:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:07.866 1+0 records in 00:03:07.866 1+0 records out 00:03:07.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00533258 s, 197 MB/s 00:03:07.866 13:10:09 -- spdk/autotest.sh@105 -- # sync 00:03:07.866 13:10:09 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:07.866 13:10:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:07.866 13:10:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:15.997 13:10:17 -- spdk/autotest.sh@111 -- # uname -s 00:03:15.997 13:10:17 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:15.997 13:10:17 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:03:15.997 13:10:17 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:15.997 13:10:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:15.997 13:10:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:15.997 13:10:17 -- common/autotest_common.sh@10 -- # set +x 00:03:15.997 ************************************ 00:03:15.997 
START TEST setup.sh 00:03:15.997 ************************************ 00:03:15.997 13:10:17 setup.sh -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:15.997 * Looking for test storage... 00:03:15.997 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:15.997 13:10:17 setup.sh -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:15.997 13:10:17 setup.sh -- common/autotest_common.sh@1711 -- # lcov --version 00:03:15.997 13:10:17 setup.sh -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:15.997 13:10:17 setup.sh -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@345 -- # : 1 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@353 -- # local d=1 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@355 -- # echo 1 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@353 -- # local d=2 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@355 -- # echo 2 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:15.997 13:10:17 setup.sh -- scripts/common.sh@368 -- # return 0 00:03:15.997 13:10:17 setup.sh -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.997 13:10:17 setup.sh -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.997 --rc genhtml_branch_coverage=1 00:03:15.997 --rc genhtml_function_coverage=1 00:03:15.997 --rc genhtml_legend=1 00:03:15.997 --rc geninfo_all_blocks=1 00:03:15.997 --rc geninfo_unexecuted_blocks=1 00:03:15.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.997 ' 00:03:15.997 13:10:17 setup.sh -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.997 --rc genhtml_branch_coverage=1 00:03:15.997 --rc genhtml_function_coverage=1 00:03:15.997 --rc genhtml_legend=1 00:03:15.997 --rc geninfo_all_blocks=1 00:03:15.997 --rc geninfo_unexecuted_blocks=1 00:03:15.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.997 ' 00:03:15.997 13:10:17 setup.sh -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.997 --rc genhtml_branch_coverage=1 00:03:15.997 --rc genhtml_function_coverage=1 00:03:15.997 --rc genhtml_legend=1 00:03:15.997 --rc geninfo_all_blocks=1 00:03:15.997 --rc geninfo_unexecuted_blocks=1 00:03:15.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.997 ' 00:03:15.997 13:10:17 setup.sh -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.997 --rc genhtml_branch_coverage=1 00:03:15.997 --rc genhtml_function_coverage=1 00:03:15.997 --rc genhtml_legend=1 00:03:15.997 --rc geninfo_all_blocks=1 00:03:15.997 --rc geninfo_unexecuted_blocks=1 00:03:15.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.997 ' 00:03:15.997 13:10:17 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:15.997 13:10:17 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:15.997 13:10:17 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:15.997 13:10:17 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:15.997 13:10:17 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:15.997 
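The cmp_versions walk traced above (and repeated at the start of each nested suite) is what decides that the installed lcov 1.15 predates version 2 and therefore which LCOV_OPTS string gets exported. A simplified reconstruction of that scripts/common.sh helper, under the assumption that the decimal() normalization step can be elided for purely numeric fields:

# Simplified sketch of lt()/cmp_versions() as traced above; the real helper also
# passes each field through decimal() to strip non-numeric suffixes.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local op=$2 IFS=.-:          # split version strings on '.', '-' and ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]
}
lt 1.15 2 && echo "older lcov syntax"   # first field: 1 < 2, so lt succeeds, matching the branch taken in this run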
13:10:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:15.997 ************************************ 00:03:15.997 START TEST acl 00:03:15.997 ************************************ 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:15.997 * Looking for test storage... 00:03:15.997 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1711 -- # lcov --version 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:15.997 13:10:17 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.997 --rc genhtml_branch_coverage=1 00:03:15.997 --rc genhtml_function_coverage=1 00:03:15.997 --rc genhtml_legend=1 00:03:15.997 --rc geninfo_all_blocks=1 00:03:15.997 --rc geninfo_unexecuted_blocks=1 00:03:15.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.997 ' 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.997 --rc genhtml_branch_coverage=1 00:03:15.997 --rc genhtml_function_coverage=1 00:03:15.997 --rc genhtml_legend=1 00:03:15.997 --rc geninfo_all_blocks=1 00:03:15.997 --rc geninfo_unexecuted_blocks=1 00:03:15.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.997 ' 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.997 --rc genhtml_branch_coverage=1 00:03:15.997 --rc genhtml_function_coverage=1 00:03:15.997 --rc genhtml_legend=1 00:03:15.997 --rc geninfo_all_blocks=1 00:03:15.997 --rc geninfo_unexecuted_blocks=1 00:03:15.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.997 ' 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.997 --rc genhtml_branch_coverage=1 00:03:15.997 --rc genhtml_function_coverage=1 00:03:15.997 --rc genhtml_legend=1 00:03:15.997 --rc geninfo_all_blocks=1 00:03:15.997 --rc geninfo_unexecuted_blocks=1 00:03:15.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.997 ' 00:03:15.997 13:10:17 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:15.997 13:10:17 setup.sh.acl -- 
common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:15.997 13:10:17 setup.sh.acl -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:15.997 13:10:17 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:15.997 13:10:17 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:15.997 13:10:17 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:15.997 13:10:17 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:15.997 13:10:17 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:15.997 13:10:17 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.997 13:10:17 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.195 13:10:21 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:20.195 13:10:21 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:20.195 13:10:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.195 13:10:21 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:20.195 13:10:21 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.195 13:10:21 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:22.736 Hugepages 00:03:22.736 node hugesize free / total 00:03:22.736 13:10:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:22.736 13:10:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.736 13:10:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.736 13:10:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:22.736 13:10:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.736 13:10:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:22.995 13:10:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.995 13:10:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 00:03:22.995 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:22.995 13:10:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:22.995 13:10:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.995 13:10:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ 
ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:22.995 13:10:25 
setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:22.995 13:10:25 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:22.995 13:10:25 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:22.995 13:10:25 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:22.995 13:10:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:23.254 ************************************ 00:03:23.254 START TEST denied 00:03:23.254 ************************************ 00:03:23.254 13:10:25 setup.sh.acl.denied -- common/autotest_common.sh@1129 -- # denied 00:03:23.254 13:10:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:23.254 13:10:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:23.254 13:10:25 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:23.254 13:10:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.254 13:10:25 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:27.452 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:27.452 13:10:28 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:27.452 13:10:28 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:27.452 13:10:28 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:27.452 13:10:28 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:27.452 13:10:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:27.452 13:10:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:27.452 13:10:28 
setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:27.452 13:10:28 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:27.452 13:10:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.452 13:10:28 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.652 00:03:31.652 real 0m8.448s 00:03:31.652 user 0m2.564s 00:03:31.652 sys 0m5.211s 00:03:31.652 13:10:33 setup.sh.acl.denied -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.652 13:10:33 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:31.652 ************************************ 00:03:31.652 END TEST denied 00:03:31.652 ************************************ 00:03:31.652 13:10:33 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:31.652 13:10:33 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.652 13:10:33 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.652 13:10:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:31.652 ************************************ 00:03:31.652 START TEST allowed 00:03:31.652 ************************************ 00:03:31.652 13:10:33 setup.sh.acl.allowed -- common/autotest_common.sh@1129 -- # allowed 00:03:31.652 13:10:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:31.652 13:10:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:31.652 13:10:33 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:31.652 13:10:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.652 13:10:33 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:36.938 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:36.938 13:10:38 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:36.938 13:10:38 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:36.938 13:10:38 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:36.938 13:10:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.938 13:10:38 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.142 00:03:41.142 real 0m9.079s 00:03:41.142 user 0m2.556s 00:03:41.142 sys 0m5.098s 00:03:41.142 13:10:42 setup.sh.acl.allowed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.142 13:10:42 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:41.142 ************************************ 00:03:41.142 END TEST allowed 00:03:41.142 ************************************ 00:03:41.142 00:03:41.142 real 0m25.338s 00:03:41.142 user 0m7.982s 00:03:41.142 sys 0m15.557s 00:03:41.142 13:10:42 setup.sh.acl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.142 13:10:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:41.142 ************************************ 00:03:41.142 END TEST acl 00:03:41.142 ************************************ 00:03:41.142 13:10:42 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:41.142 13:10:42 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.142 13:10:42 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.142 13:10:42 setup.sh 
-- common/autotest_common.sh@10 -- # set +x 00:03:41.142 ************************************ 00:03:41.142 START TEST hugepages 00:03:41.142 ************************************ 00:03:41.142 13:10:42 setup.sh.hugepages -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:41.142 * Looking for test storage... 00:03:41.142 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:41.142 13:10:43 setup.sh.hugepages -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:41.142 13:10:43 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # lcov --version 00:03:41.142 13:10:43 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:41.142 13:10:43 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:41.142 13:10:43 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:03:41.142 13:10:43 setup.sh.hugepages -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:41.142 13:10:43 setup.sh.hugepages -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:41.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.142 --rc genhtml_branch_coverage=1 00:03:41.142 --rc genhtml_function_coverage=1 00:03:41.142 --rc genhtml_legend=1 00:03:41.142 --rc geninfo_all_blocks=1 00:03:41.142 --rc geninfo_unexecuted_blocks=1 00:03:41.142 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:41.142 ' 00:03:41.142 13:10:43 setup.sh.hugepages -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:41.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.142 --rc genhtml_branch_coverage=1 00:03:41.142 --rc genhtml_function_coverage=1 00:03:41.142 --rc genhtml_legend=1 00:03:41.142 --rc geninfo_all_blocks=1 00:03:41.142 --rc geninfo_unexecuted_blocks=1 00:03:41.142 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:41.142 ' 00:03:41.142 13:10:43 setup.sh.hugepages -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:41.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.143 --rc genhtml_branch_coverage=1 00:03:41.143 --rc genhtml_function_coverage=1 00:03:41.143 --rc genhtml_legend=1 00:03:41.143 --rc geninfo_all_blocks=1 00:03:41.143 --rc geninfo_unexecuted_blocks=1 00:03:41.143 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:41.143 ' 00:03:41.143 13:10:43 setup.sh.hugepages -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:41.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.143 --rc genhtml_branch_coverage=1 00:03:41.143 --rc genhtml_function_coverage=1 00:03:41.143 --rc genhtml_legend=1 00:03:41.143 --rc geninfo_all_blocks=1 00:03:41.143 --rc geninfo_unexecuted_blocks=1 00:03:41.143 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:41.143 ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:41.143 13:10:43 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 47723456 kB' 'MemAvailable: 47963072 kB' 'Buffers: 1076 kB' 'Cached: 9042008 kB' 'SwapCached: 0 kB' 'Active: 9786684 kB' 'Inactive: 256660 kB' 'Active(anon): 9396772 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1003624 kB' 'Mapped: 145896 kB' 'Shmem: 8396512 kB' 'KReclaimable: 210780 kB' 'Slab: 846420 kB' 'SReclaimable: 210780 kB' 'SUnreclaim: 635640 kB' 'KernelStack: 21760 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39013824 kB' 'Committed_AS: 10962800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214228 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.143 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 
13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 
13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.144 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 
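Editor's note: the field-by-field scan traced above is setup/common.sh's get_meminfo helper walking /proc/meminfo until it reaches the requested key (here Hugepagesize) and echoing its value. The sketch below is reconstructed from the xtrace alone, not taken from the SPDK source; the function name get_meminfo_sketch and the simplified global-only loop are assumptions.
get_meminfo_sketch() {                    # hypothetical name; mirrors the traced loop
    local get=$1 var val _                # e.g. get=Hugepagesize
    while IFS=': ' read -r var val _; do  # "Hugepagesize:  2048 kB" -> var=Hugepagesize val=2048
        [[ $var == "$get" ]] || continue  # every non-matching field is skipped, as in the trace
        echo "$val"                       # prints 2048 for Hugepagesize
        return 0
    done < /proc/meminfo
    return 1
}
The 2048 kB value stored here as default_hugepages is consistent with the later get_test_nr_hugepages 2097152 0 call settling on nr_hugepages=1024 (2097152 kB / 2048 kB = 1024 pages), as traced further down.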
00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:03:41.145 13:10:43 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:03:41.145 13:10:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.145 13:10:43 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.145 13:10:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:41.145 ************************************ 00:03:41.145 START TEST single_node_setup 00:03:41.145 ************************************ 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1129 -- # single_node_setup 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup 
-- setup/hugepages.sh@48 -- # local size=2097152 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.145 13:10:43 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:44.441 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:44.441 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:44.441 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:44.441 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:44.701 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:44.701 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:44.701 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:44.701 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:44.701 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:44.701 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:44.701 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:44.701 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:44.701 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:44.701 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:44.701 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:44.701 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:46.618 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # 
verify_nr_hugepages 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49935592 kB' 'MemAvailable: 50174580 kB' 'Buffers: 1076 kB' 'Cached: 9042160 kB' 'SwapCached: 0 kB' 'Active: 9790244 kB' 'Inactive: 256660 kB' 'Active(anon): 9400332 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1007072 kB' 'Mapped: 146068 kB' 'Shmem: 8396664 kB' 'KReclaimable: 209524 kB' 'Slab: 844792 kB' 'SReclaimable: 209524 kB' 'SUnreclaim: 635268 kB' 'KernelStack: 22000 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10963824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214212 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
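Editor's note: before the comparisons resume, the trace captures the whole /proc/meminfo snapshot with mapfile and strips any leading "Node N " prefix so global and per-node files parse the same way. A sketch of that capture step, assuming the behaviour implied by the [[ -e /sys/devices/system/node/node/meminfo ]] test above; the wrapper name snapshot_meminfo and its argument handling are guesses, not the SPDK source.
shopt -s extglob                                      # needed for the +([0-9]) prefix pattern
snapshot_meminfo() {                                  # hypothetical wrapper around the traced steps
    local node=$1 mem_f=/proc/meminfo mem
    # with a node argument, read that node's meminfo instead of the global file
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"                         # one array element per meminfo line
    mem=("${mem[@]#Node +([0-9]) }")                  # per-node lines begin with "Node N ", drop it
    printf '%s\n' "${mem[@]}"                         # the long snapshot line seen in the trace
}
Because no node was given here (local node=), mem_f stays /proc/meminfo and the prefix strip is a no-op.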
00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.618 13:10:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.618 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 
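Editor's note on why every comparison in this trace renders as \A\n\o\n\H\u\g\e\P\a\g\e\s or \H\u\g\e\p\a\g\e\s\i\z\e: inside [[ ]], a quoted right-hand side of == is matched literally, and bash's xtrace re-prints that literal pattern with each character backslash-escaped so it cannot be re-read as a glob. The same rendering can be reproduced in isolation:
set -x
field=AnonHugePages
[[ $field == "AnonHugePages" ]] && echo matched
# xtrace prints: + [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
set +x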
00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:46.620 13:10:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49936228 kB' 'MemAvailable: 50175216 kB' 'Buffers: 1076 kB' 'Cached: 9042160 kB' 'SwapCached: 0 kB' 'Active: 9789060 kB' 'Inactive: 256660 kB' 'Active(anon): 9399148 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1005796 kB' 'Mapped: 145948 kB' 'Shmem: 8396664 kB' 'KReclaimable: 209524 kB' 'Slab: 844828 kB' 'SReclaimable: 209524 kB' 'SUnreclaim: 635304 kB' 'KernelStack: 21824 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10962344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214132 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
continue 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local 
00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:46.622 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49936048 kB' 'MemAvailable: 50175036 kB' 'Buffers: 1076 kB' 'Cached: 9042184 kB' 'SwapCached: 0 kB' 'Active: 9789556 kB' 'Inactive: 256660 kB' 'Active(anon): 9399644 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1006260 kB' 'Mapped: 145948 kB' 'Shmem: 8396688 kB' 'KReclaimable: 209524 kB' 'Slab: 844828 kB' 'SReclaimable: 209524 kB' 'SUnreclaim: 635304 kB' 'KernelStack: 21856 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10963868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214196 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB'
[xtrace elided: setup/common.sh@31-32 repeats the same per-field scan over every /proc/meminfo field until HugePages_Rsvd is reached]
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:03:46.624 nr_hugepages=1024
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:46.624 resv_hugepages=0
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:46.624 surplus_hugepages=0
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:46.624 anon_hugepages=0
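With surp, resv and the configured nr_hugepages in hand, the script prints the derived accounting (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and then, at setup/hugepages.sh@106 and @108, sanity-checks that the configured count is consistent with what the kernel reports. A rough stand-alone approximation of that check, assuming only standard /proc/meminfo fields; expected, total, surp and resv are illustrative names and the exact expression used by hugepages.sh may differ:

  expected=1024   # the count this test configures
  total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
  surp=$(awk '$1 == "HugePages_Surp:" { print $2 }' /proc/meminfo)
  resv=$(awk '$1 == "HugePages_Rsvd:" { print $2 }' /proc/meminfo)
  if (( total == expected + surp + resv )); then
      echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
  else
      echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
  fi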
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:46.624 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49934536 kB' 'MemAvailable: 50173524 kB' 'Buffers: 1076 kB' 'Cached: 9042184 kB' 'SwapCached: 0 kB' 'Active: 9790284 kB' 'Inactive: 256660 kB' 'Active(anon): 9400372 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1006988 kB' 'Mapped: 145948 kB' 'Shmem: 8396688 kB' 'KReclaimable: 209524 kB' 'Slab: 844828 kB' 'SReclaimable: 209524 kB' 'SUnreclaim: 635304 kB' 'KernelStack: 21872 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10963888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214276 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB'
[xtrace elided: setup/common.sh@31-32 repeats the same per-field scan over every /proc/meminfo field until HugePages_Total is reached]
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
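get_nodes above walks /sys/devices/system/node/node*, records a hugepage count for each node (1024 for the first node and 0 for the second in this run) and ends up with no_nodes=2 before looping over the nodes to verify them. One way to read the same per-node totals, a minimal sketch assuming the usual sysfs layout; node_dir, node_id, hp and no_nodes are illustrative names, not the variables hugepages.sh uses:

  no_nodes=0
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node_id=${node_dir##*node}
      # per-node meminfo lines carry a "Node <id>" prefix, so the key is field 3
      hp=$(awk '$3 == "HugePages_Total:" { print $4 }' "$node_dir/meminfo")
      echo "node${node_id}: HugePages_Total=${hp}"
      no_nodes=$(( no_nodes + 1 ))
  done
  echo "no_nodes=${no_nodes}"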
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:46.626 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 22526592 kB' 'MemUsed: 10058776 kB' 'SwapCached: 0 kB' 'Active: 5905776 kB' 'Inactive: 96472 kB' 'Active(anon): 5674944 kB' 'Inactive(anon): 0 kB' 'Active(file): 230832 kB' 'Inactive(file): 96472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5343416 kB' 'Mapped: 52936 kB' 'AnonPages: 662076 kB' 'Shmem: 5016112 kB' 'KernelStack: 12136 kB' 'PageTables: 5604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130196 kB' 'Slab: 467120 kB' 'SReclaimable: 130196 kB' 'SUnreclaim: 336924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 repeats the same per-field scan over the node0 meminfo fields]
00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read
-r var val _ 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:46.628 node0=1024 expecting 1024 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.628 00:03:46.628 real 0m5.399s 00:03:46.628 user 0m1.473s 00:03:46.628 sys 0m2.435s 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.628 13:10:48 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:03:46.628 ************************************ 00:03:46.628 END TEST single_node_setup 00:03:46.628 ************************************ 00:03:46.628 13:10:48 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:03:46.628 13:10:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.628 13:10:48 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.628 13:10:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.628 ************************************ 00:03:46.628 START TEST even_2G_alloc 00:03:46.628 ************************************ 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1129 -- # even_2G_alloc 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local 
_nr_hugepages=1024 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.628 13:10:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:49.925 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.925 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:50.190 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:03:50.190 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:03:50.190 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:50.190 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:50.190 13:10:52 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:50.190 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:50.190 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:50.190 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49931860 kB' 'MemAvailable: 50170860 kB' 'Buffers: 1076 kB' 'Cached: 9042324 kB' 'SwapCached: 0 kB' 'Active: 9790544 kB' 'Inactive: 256660 kB' 'Active(anon): 9400632 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1007108 kB' 'Mapped: 145008 kB' 'Shmem: 8396828 kB' 'KReclaimable: 209548 kB' 'Slab: 844936 kB' 'SReclaimable: 209548 kB' 'SUnreclaim: 635388 kB' 'KernelStack: 21744 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10954712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214244 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 13:10:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [setup/common.sh@31-32: the same IFS=': ' / read -r var val _ / field test / continue trace repeats for MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal; none matches AnonHugePages] 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49932744 kB' 'MemAvailable: 50171736 kB' 'Buffers: 1076 kB' 'Cached: 9042328 kB' 'SwapCached: 0 kB' 'Active: 9789872 kB' 
'Inactive: 256660 kB' 'Active(anon): 9399960 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1006452 kB' 'Mapped: 144928 kB' 'Shmem: 8396832 kB' 'KReclaimable: 209532 kB' 'Slab: 844932 kB' 'SReclaimable: 209532 kB' 'SUnreclaim: 635400 kB' 'KernelStack: 21728 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10954732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214212 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 13:10:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # [setup/common.sh@31-32: the same IFS=': ' / read -r var val _ / field test / continue trace repeats for Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total; none matches HugePages_Surp] 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49931988 kB' 'MemAvailable: 50170980 kB' 'Buffers: 1076 kB' 'Cached: 9042344 kB' 'SwapCached: 0 kB' 'Active: 9789872 kB' 'Inactive: 256660 kB' 'Active(anon): 9399960 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1006452 kB' 'Mapped: 144928 kB' 'Shmem: 8396848 kB' 'KReclaimable: 209532 kB' 'Slab: 844932 kB' 'SReclaimable: 209532 kB' 'SUnreclaim: 635400 kB' 'KernelStack: 21728 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10954752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214212 kB' 'VmallocChunk: 0 kB' 
'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:50.197 nr_hugepages=1024 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:50.197 resv_hugepages=0 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:50.197 surplus_hugepages=0 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:50.197 anon_hugepages=0 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49931988 kB' 'MemAvailable: 50170980 kB' 'Buffers: 1076 kB' 'Cached: 9042344 kB' 'SwapCached: 0 kB' 'Active: 9789872 kB' 'Inactive: 256660 kB' 'Active(anon): 9399960 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1006452 kB' 'Mapped: 144928 kB' 'Shmem: 8396848 kB' 'KReclaimable: 209532 kB' 'Slab: 844932 kB' 'SReclaimable: 209532 kB' 'SUnreclaim: 635400 kB' 'KernelStack: 21728 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10954776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214212 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 13:10:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
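[editor's note] The loop traced above and resumed below is setup/common.sh's get_meminfo helper scanning every meminfo field until it reaches the requested key (here HugePages_Total). A minimal sketch of that behaviour, reconstructed from this trace — function shape and loop structure are assumptions, the real helper differs in detail:

    #!/usr/bin/env bash
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem

        # Per-node queries read the node-local meminfo instead (assumed path,
        # matching the [[ -e /sys/devices/system/node/node$node/meminfo ]] check in the trace).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <id> "; strip it as the trace does.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Skip ("continue") until the field name matches, then echo its value.
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Total      # 1024 on the box in this log
    get_meminfo HugePages_Surp 0     # node-0 surplus huge pages
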
00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:50.199 13:10:52 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 23564236 kB' 'MemUsed: 9021132 kB' 'SwapCached: 0 kB' 'Active: 5906720 kB' 'Inactive: 96472 kB' 'Active(anon): 5675888 kB' 'Inactive(anon): 0 kB' 'Active(file): 230832 kB' 'Inactive(file): 96472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5343580 kB' 'Mapped: 52560 kB' 'AnonPages: 662760 kB' 'Shmem: 5016276 kB' 'KernelStack: 12152 kB' 'PageTables: 5712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130140 kB' 'Slab: 467124 kB' 'SReclaimable: 130140 kB' 'SUnreclaim: 336984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.200 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.200 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.200 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.200 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.200 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.200 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.200 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.200 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.200 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.200 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.200 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.200 13:10:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.200 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: the IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeats for each remaining field of the node 0 snapshot, from Dirty through HugePages_Free; none matches]
00:03:50.200 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
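HugePages_Surp, the field this scan is looking for, is the kernel's count of surplus huge pages, i.e. pages allocated beyond the configured nr_hugepages pool through overcommit; the per-node lookups in this test all come back 0, so the 512 pages per node reported later are entirely the explicit reservation. As a quick manual cross-check outside the harness (this one-liner is illustrative only and is not part of the SPDK scripts), the system-wide value can be pulled straight from /proc/meminfo:

    # Print the system-wide surplus hugepage count (illustrative equivalent of the lookup traced above).
    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo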
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.201 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.462 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.462 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.462 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32859376 kB' 'MemFree: 26365232 kB' 'MemUsed: 6494144 kB' 'SwapCached: 0 kB' 'Active: 3882868 kB' 'Inactive: 160188 kB' 'Active(anon): 3723788 kB' 'Inactive(anon): 0 kB' 'Active(file): 159080 kB' 'Inactive(file): 160188 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3699900 kB' 'Mapped: 92872 kB' 'AnonPages: 343296 kB' 'Shmem: 3380632 kB' 'KernelStack: 9560 kB' 'PageTables: 2468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79392 kB' 'Slab: 377808 kB' 'SReclaimable: 79392 kB' 'SUnreclaim: 298416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: the per-field scan of this node 1 snapshot repeats the same IFS/read/test/continue cycle for every field from MemTotal through HugePages_Free; none matches HugePages_Surp]
00:03:50.463 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.464 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:50.464 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
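Taken together, the trace above spells out the whole shape of setup/common.sh's get_meminfo helper: read /proc/meminfo, or /sys/devices/system/node/nodeN/meminfo when a node number is passed, strip the "Node <n> " prefix that the per-node file puts on every line, then walk the fields one by one until the requested key matches and echo its value. Below is a minimal stand-alone sketch of the same idea; it mirrors the traced logic, but the function name and layout are illustrative and it is not the SPDK implementation itself.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip the per-node prefix

    # get_meminfo_sketch FIELD [NODE]
    # Echo the value of FIELD from /proc/meminfo, or from the per-node meminfo file
    # when NODE is given (those lines are prefixed with "Node <n> ").
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # harmless no-op for /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then     # e.g. HugePages_Surp
                echo "${val:-0}"
                return 0
            fi
        done
        return 1
    }

For example, get_meminfo_sketch HugePages_Surp 1 would print 0 against the node 1 snapshot shown above.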
00:03:50.464 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:50.464 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:50.464 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:50.464 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:50.464 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:03:50.464 node0=512 expecting 512
00:03:50.464 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:50.464 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:50.464 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:50.464 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512'
00:03:50.464 node1=512 expecting 512
00:03:50.464 13:10:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]]
00:03:50.464
00:03:50.464 real	0m3.684s
00:03:50.464 user	0m1.408s
00:03:50.464 sys	0m2.342s
00:03:50.464 13:10:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:50.464 13:10:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:50.464 ************************************
00:03:50.464 END TEST even_2G_alloc
00:03:50.464 ************************************
00:03:50.464 13:10:52 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc
00:03:50.464 13:10:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:50.464 13:10:52 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:50.464 13:10:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:50.464 ************************************
00:03:50.464 START TEST odd_alloc
00:03:50.464 ************************************
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1129 -- # odd_alloc
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
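For odd_alloc the request is 2098176 kB (2049 MB), which the helper turns into nr_hugepages=1025 with the 2048 kB default hugepage size, deliberately an odd count. get_test_nr_hugepages_per_node then spreads that total over the two NUMA nodes, and the assignments traced just above give node 1 the floor of the even share (512) while node 0 absorbs the remainder (513). A small sketch of that distribution follows; the helper name is made up for illustration and it only reproduces the visible 513/512 outcome, it is not the SPDK function.

    # split_hugepages_sketch TOTAL NODES
    # Distribute TOTAL hugepages over NODES NUMA nodes; lower-numbered nodes absorb any remainder.
    split_hugepages_sketch() {
        local remaining=$1 nodes_left=$2 n
        local -a per_node
        for (( n = nodes_left - 1; n >= 0; n--, nodes_left-- )); do
            per_node[n]=$(( remaining / nodes_left ))
            remaining=$(( remaining - per_node[n] ))
        done
        for n in "${!per_node[@]}"; do
            echo "node${n}=${per_node[n]}"
        done
    }

Running split_hugepages_sketch 1025 2 prints node0=513 and node1=512, matching the nodes_test assignments above.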
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:50.464 13:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:53.761 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:53.761 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:54.033 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages
00:03:54.033 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node
00:03:54.033 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:54.033 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:54.033 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:54.033 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:54.033 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:54.033 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:54.033 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
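The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test just above is verify_nr_hugepages checking the kernel's transparent-hugepage mode before it samples AnonHugePages: the expanded left-hand side is the content of the THP "enabled" control, with the bracketed word ([madvise] here) marking the active mode, so the guard holds whenever THP is not fully disabled. A tiny sketch of the same guard is below; the /sys/kernel/mm/transparent_hugepage/enabled path is the standard kernel location and is an assumption here, since the trace only shows the already-expanded string.

    # Guard sketch: only look at AnonHugePages when transparent hugepages are not set to [never].
    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"
    if [[ $thp_mode != *"[never]"* ]]; then
        anon_kb=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
        echo "AnonHugePages: ${anon_kb:-0} kB"    # 0 kB in the snapshot captured below
    fi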
00:03:54.033 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:54.034 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:54.034 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:54.034 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.034 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.034 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.034 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.034 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.034 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.034 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.034 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.034 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49945804 kB' 'MemAvailable: 50184780 kB' 'Buffers: 1076 kB' 'Cached: 9042492 kB' 'SwapCached: 0 kB' 'Active: 9791048 kB' 'Inactive: 256660 kB' 'Active(anon): 9401136 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1007296 kB' 'Mapped: 144984 kB' 'Shmem: 8396996 kB' 'KReclaimable: 209500 kB' 'Slab: 845176 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 635676 kB' 'KernelStack: 21696 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40061376 kB' 'Committed_AS: 10955392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214260 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB'
[xtrace elided: the per-field scan of this /proc/meminfo snapshot repeats the IFS/read/test/continue cycle for every field from MemTotal through HardwareCorrupted; none matches AnonHugePages]
00:03:54.035 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:54.035 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.035 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:54.035 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0
00:03:54.035 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:54.035 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.035 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:54.035 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:54.035 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.035 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.035 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.035 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.036 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.036 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.036 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.036 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.036 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49947408 kB' 'MemAvailable: 50186384 kB' 'Buffers: 1076 kB' 'Cached: 9042496 kB' 'SwapCached: 0 kB' 'Active: 9790844 kB' 'Inactive: 256660 kB' 'Active(anon): 9400932 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1007148 kB' 'Mapped: 144936 kB' 'Shmem: 8397000 kB' 'KReclaimable: 209500 kB' 'Slab: 845196 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 635696 kB' 'KernelStack: 21728 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40061376 kB' 'Committed_AS: 10955408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214228 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB'
[xtrace elided: the per-field scan of this second snapshot against HugePages_Surp; the fields MemTotal through FilePmdMapped are each skipped with 'continue']
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.037 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.037 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.037 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.037 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.037 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.037 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.037 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.037 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.038 13:10:56 
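The loop being traced here is setup/common.sh's get_meminfo helper: it selects /proc/meminfo (or a per-NUMA-node meminfo file when a node index is passed), strips any leading "Node N " prefix, then reads "key: value" pairs until the requested field (HugePages_Surp above, HugePages_Rsvd next) matches, echoes the value, and returns. As rough orientation only, an approximate reconstruction of that helper from the trace is sketched below; it is not the verbatim SPDK script, and details such as argument validation and error handling may differ.

    # Approximate reconstruction of get_meminfo as seen in the trace
    # (setup/common.sh); the real script may differ in detail.
    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node index, prefer that node's own meminfo view.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"    # a trailing "kB" unit falls into the discarded field
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

With that helper, the surrounding hugepages.sh steps reduce to calls such as surp=$(get_meminfo HugePages_Surp) and resv=$(get_meminfo HugePages_Rsvd), both of which return 0 in this run, matching the surp=0 assignment just above.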
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49946852 kB' 'MemAvailable: 50185828 kB' 'Buffers: 1076 kB' 'Cached: 9042512 kB' 'SwapCached: 0 kB' 'Active: 9790864 kB' 'Inactive: 256660 kB' 'Active(anon): 9400952 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1007152 kB' 'Mapped: 144936 kB' 'Shmem: 8397016 kB' 'KReclaimable: 209500 kB' 'Slab: 845196 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 635696 kB' 'KernelStack: 21728 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40061376 kB' 'Committed_AS: 10955432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214228 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.038 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.039 
13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.039 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:03:54.040 nr_hugepages=1025 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:54.040 resv_hugepages=0 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:54.040 surplus_hugepages=0 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:54.040 anon_hugepages=0 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49947480 kB' 'MemAvailable: 50186456 kB' 'Buffers: 1076 kB' 'Cached: 9042532 kB' 'SwapCached: 0 kB' 'Active: 9791136 kB' 'Inactive: 256660 kB' 'Active(anon): 9401224 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1007432 kB' 'Mapped: 144936 kB' 'Shmem: 8397036 kB' 'KReclaimable: 209500 kB' 'Slab: 845196 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 635696 kB' 'KernelStack: 21728 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40061376 kB' 'Committed_AS: 10955452 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 214212 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.040 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.041 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:54.042 13:10:56 
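By this point the trace has confirmed the pool as a whole: HugePages_Total is 1025 with HugePages_Rsvd=0 and HugePages_Surp=0, so the odd_alloc check moves on to the per-node view. get_nodes enumerates /sys/devices/system/node/node+([0-9]), records how many pages each of the two nodes received (513 and 512, since 1025 cannot split evenly), and the test then re-reads each node's own meminfo, starting with get_meminfo HugePages_Surp 0 for node0. A hypothetical stand-alone illustration of that per-node pass is sketched below; the paths and the awk extraction are assumptions mirroring the kernel's per-node meminfo format, not code from hugepages.sh.

    # Hypothetical per-node check mirroring the trace: each NUMA node exposes
    # its own meminfo with lines of the form "Node 0 HugePages_Total:  513".
    shopt -s extglob
    declare -A nodes_sys

    for node in /sys/devices/system/node/node+([0-9]); do
        idx=${node##*node}                  # 0, 1, ... as in the trace
        nodes_sys[$idx]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
    done

    no_nodes=${#nodes_sys[@]}               # 2 on this machine
    echo "split of 1025 across nodes: ${nodes_sys[0]:-?} + ${nodes_sys[1]:-?}"   # 513 + 512
    awk '$3 == "HugePages_Surp:" {print "node0 surplus:", $4}' /sys/devices/system/node/node0/meminfo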
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 23577472 kB' 'MemUsed: 9007896 kB' 'SwapCached: 0 kB' 'Active: 5907300 kB' 'Inactive: 96472 kB' 'Active(anon): 5676468 kB' 'Inactive(anon): 0 kB' 'Active(file): 230832 kB' 'Inactive(file): 96472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5343628 kB' 'Mapped: 52568 kB' 'AnonPages: 663268 kB' 'Shmem: 5016324 kB' 'KernelStack: 12152 kB' 'PageTables: 5664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130108 kB' 'Slab: 467356 kB' 'SReclaimable: 130108 kB' 'SUnreclaim: 337248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.042 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:54.043 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32859376 kB' 'MemFree: 26372116 kB' 'MemUsed: 6487260 kB' 'SwapCached: 0 kB' 'Active: 3884748 kB' 'Inactive: 160188 kB' 'Active(anon): 3725668 kB' 'Inactive(anon): 0 kB' 'Active(file): 159080 kB' 'Inactive(file): 160188 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3700020 kB' 'Mapped: 92368 kB' 'AnonPages: 345000 kB' 'Shmem: 3380752 kB' 'KernelStack: 9560 kB' 'PageTables: 2540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79392 kB' 'Slab: 377840 kB' 'SReclaimable: 79392 kB' 'SUnreclaim: 298448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
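The trace above repeats the same lookup for every /proc/meminfo key: setup/common.sh maps one NUMA node's meminfo file into an array, strips the "Node <id>" prefix, then walks the entries with IFS=': ' read -r var val _ until it reaches the requested key (HugePages_Surp for node 1 here) and echoes its value. A minimal sketch of that pattern, assuming a simplified stand-in helper rather than the exact setup/common.sh get_meminfo():

#!/usr/bin/env bash
# Sketch of the lookup pattern the trace above keeps repeating: read one
# NUMA node's meminfo, strip the "Node <id> " prefix, and print the value
# of a single key. (Assumption: simplified stand-in for the helper in
# setup/common.sh, not a verbatim copy of it.)
shopt -s extglob

get_node_meminfo() {
    local get=$1 node=$2
    local mem_f=/sys/devices/system/node/node${node}/meminfo
    [[ -e $mem_f ]] || mem_f=/proc/meminfo        # fall back to the global view

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")              # per-node lines carry a "Node N " prefix

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# On this box the trace shows 513 hugepages on node 0 and 0 surplus pages:
get_node_meminfo HugePages_Total 0
get_node_meminfo HugePages_Surp  0

Called this way, the helper prints the same 513 and 0 that the traced echo statements return once the key is found.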
00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.044 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:03:54.045 node0=513 expecting 513 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:54.045 node1=512 expecting 512 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:54.045 00:03:54.045 real 0m3.666s 00:03:54.045 user 0m1.395s 00:03:54.045 sys 0m2.334s 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.045 13:10:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.045 ************************************ 00:03:54.045 END TEST odd_alloc 00:03:54.045 ************************************ 00:03:54.045 13:10:56 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:03:54.045 13:10:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- 
# '[' 2 -le 1 ']' 00:03:54.045 13:10:56 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.045 13:10:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.306 ************************************ 00:03:54.306 START TEST custom_alloc 00:03:54.306 ************************************ 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1129 -- # custom_alloc 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:54.306 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.307 13:10:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:57.605 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.605 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.605 
13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 48929788 kB' 'MemAvailable: 49168764 kB' 'Buffers: 1076 kB' 'Cached: 9042676 kB' 'SwapCached: 0 kB' 'Active: 9792548 kB' 'Inactive: 256660 kB' 'Active(anon): 9402636 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1008680 kB' 'Mapped: 145008 kB' 'Shmem: 8397180 kB' 'KReclaimable: 209500 kB' 'Slab: 845060 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 635560 kB' 'KernelStack: 21808 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39538112 kB' 'Committed_AS: 10958992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214468 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.605 13:10:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.872 13:10:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.872 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 48930380 kB' 'MemAvailable: 49169356 kB' 'Buffers: 1076 kB' 'Cached: 9042680 kB' 'SwapCached: 0 kB' 'Active: 9792328 kB' 'Inactive: 256660 kB' 'Active(anon): 9402416 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1008484 kB' 'Mapped: 144952 kB' 'Shmem: 8397184 kB' 'KReclaimable: 209500 kB' 'Slab: 845108 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 635608 kB' 'KernelStack: 21776 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39538112 kB' 'Committed_AS: 10959000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214404 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.873 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.874 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.875 13:10:59 
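For context on the trace above: get_meminfo in setup/common.sh snapshots /proc/meminfo into an array and then scans it field by field until the requested key (here HugePages_Surp) matches, echoing that field's value. A minimal, hypothetical stand-in for that lookup, written as a single awk pass rather than the script's own IFS/read loop (the name get_meminfo_sketch is illustrative only, not part of the suite):

  get_meminfo_sketch() {
      # Print the numeric value of one /proc/meminfo field (whole system, no per-node filter).
      local key=$1
      awk -v k="$key" -F': *' '$1 == k { print $2 + 0; exit }' /proc/meminfo
  }
  # e.g.: get_meminfo_sketch HugePages_Surp   -> 0 on this node, matching the surp=0 recorded below
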
00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.875 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 48930908 kB' 'MemAvailable: 49169884 kB' 'Buffers: 1076 kB' 'Cached: 9042696 kB' 'SwapCached: 0 kB' 'Active: 9792208 kB' 'Inactive: 256660 kB' 'Active(anon): 9402296 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1008332 kB' 'Mapped: 144968 kB' 'Shmem: 8397200 kB' 'KReclaimable: 209500 kB' 'Slab: 845188 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 635688 kB' 'KernelStack: 21792 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39538112 kB' 'Committed_AS: 10959024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214356 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB'
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
00:03:57.877 nr_hugepages=1536
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:57.877 resv_hugepages=0
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:57.877 surplus_hugepages=0
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:57.877 anon_hugepages=0
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
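The two arithmetic tests just traced assert that the hugepage pool on this node is consistent: the expected 1536 pages must match what /proc/meminfo reports once surplus and reserved pages are added in, and both of those are 0 in this run. A rough, independent re-statement of that accounting check (a sketch only, not the suite's hugepages.sh code; the variable names expected/total/surp/rsvd are illustrative):

  expected=1536   # pool size this run asks for
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  # Mirrors the (( 1536 == nr_hugepages + surp + resv )) test traced above
  (( expected == total + surp + rsvd )) && echo "hugepage pool consistent: $total pages"
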
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.877 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 48928504 kB' 'MemAvailable: 49167480 kB' 'Buffers: 1076 kB' 'Cached: 9042716 kB' 'SwapCached: 0 kB' 'Active: 9792532 kB' 'Inactive: 256660 kB' 'Active(anon): 9402620 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1008104 kB' 'Mapped: 144968 kB' 'Shmem: 8397220 kB' 'KReclaimable: 209500 kB' 'Slab: 845188 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 635688 kB' 'KernelStack: 21888 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39538112 kB' 'Committed_AS: 10959044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214372 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB'
00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 23588916 kB' 'MemUsed: 8996452 kB' 'SwapCached: 0 kB' 'Active: 5908480 kB' 'Inactive: 96472 kB' 
'Active(anon): 5677648 kB' 'Inactive(anon): 0 kB' 'Active(file): 230832 kB' 'Inactive(file): 96472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5343680 kB' 'Mapped: 52600 kB' 'AnonPages: 664368 kB' 'Shmem: 5016376 kB' 'KernelStack: 12168 kB' 'PageTables: 5976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130108 kB' 'Slab: 467168 kB' 'SReclaimable: 130108 kB' 'SUnreclaim: 337060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:10:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:11:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.879 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32859376 kB' 'MemFree: 25341440 kB' 'MemUsed: 7517936 kB' 'SwapCached: 0 kB' 'Active: 3883820 kB' 'Inactive: 160188 kB' 'Active(anon): 3724740 kB' 'Inactive(anon): 0 kB' 'Active(file): 159080 kB' 'Inactive(file): 
160188 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3700152 kB' 'Mapped: 92368 kB' 'AnonPages: 344100 kB' 'Shmem: 3380884 kB' 'KernelStack: 9560 kB' 'PageTables: 2460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79392 kB' 'Slab: 378020 kB' 'SReclaimable: 79392 kB' 'SUnreclaim: 298628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.880 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.881 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:03:57.882 node0=512 expecting 512 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:03:57.882 node1=1024 expecting 1024 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:57.882 00:03:57.882 real 0m3.757s 00:03:57.882 user 0m1.439s 00:03:57.882 sys 0m2.379s 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.882 13:11:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:57.882 ************************************ 00:03:57.882 END TEST custom_alloc 00:03:57.882 ************************************ 00:03:57.882 13:11:00 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:57.882 13:11:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.882 13:11:00 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.882 13:11:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.142 ************************************ 
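The custom_alloc trace above finishes with node0=512 and node1=1024, matching the 512/1024 split requested for the 1536 reserved pages before the next test starts below. A minimal sketch of an equivalent per-node check follows; it reads the 2048 kB hugepage counters straight from sysfs instead of parsing the per-node meminfo files the script itself walks, and the expected[] values, the 1536 total, and the hugepages-2048kB path are assumptions taken from this particular run.
# Hedged sketch, not the SPDK setup script: compare the kernel's per-node hugepage
# counts with the split this run asked for (values below mirror this log only).
declare -A expected=( [0]=512 [1]=1024 )
total_expected=1536
actual_total=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node_dir ]] || continue
    node=${node_dir##*node}
    # default 2 MiB page size, as reported by 'Hugepagesize: 2048 kB' in this run
    nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    actual_total=$((actual_total + nr))
    echo "node${node}=${nr} expecting ${expected[$node]:-0}"
done
(( actual_total == total_expected )) && echo "per-node split adds up to the requested total"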
00:03:58.142 START TEST no_shrink_alloc 00:03:58.142 ************************************ 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1129 -- # no_shrink_alloc 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.142 13:11:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:01.440 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:80:04.5 (8086 
2021): Already using the vfio-pci driver 00:04:01.440 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:01.440 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49951996 kB' 'MemAvailable: 50190972 kB' 'Buffers: 1076 kB' 'Cached: 9042848 kB' 'SwapCached: 0 kB' 'Active: 9791832 kB' 'Inactive: 256660 kB' 'Active(anon): 9401920 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1007316 kB' 'Mapped: 145076 kB' 'Shmem: 8397352 kB' 'KReclaimable: 209500 kB' 'Slab: 845316 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 635816 kB' 'KernelStack: 21760 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10957048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214420 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.440 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
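The long run of '[[ key == pattern ]] ... continue' entries above is bash xtrace of the get_meminfo helper in setup/common.sh: it loads the meminfo file (plain /proc/meminfo here, since no node was given), strips any 'Node N' prefix, and walks the 'key: value' pairs with IFS=': ' read, skipping every key until the requested one (AnonHugePages on this pass) matches, then echoes its value and returns. A condensed sketch of that pattern, simplified to /proc/meminfo and written for illustration only, not a copy of the SPDK helper:

  get_meminfo_sketch() {
      # Print the value of one meminfo key, e.g. AnonHugePages or HugePages_Surp.
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # same skip-until-match loop as in the trace
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1                               # key not found
  }

  get_meminfo_sketch AnonHugePages           # prints 0 on this host, matching the trace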
00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.441 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Surp 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49952768 kB' 'MemAvailable: 50191744 kB' 'Buffers: 1076 kB' 'Cached: 9042852 kB' 'SwapCached: 0 kB' 'Active: 9790856 kB' 'Inactive: 256660 kB' 'Active(anon): 9400944 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1006840 kB' 'Mapped: 144988 kB' 'Shmem: 8397356 kB' 'KReclaimable: 209500 kB' 'Slab: 845312 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 635812 kB' 'KernelStack: 21744 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10957064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214388 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.442 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.443 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.443 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.443 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.443 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.443 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.443 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.443 13:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.443 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.707 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49953768 kB' 'MemAvailable: 50192744 kB' 'Buffers: 1076 kB' 'Cached: 9042872 kB' 'SwapCached: 0 kB' 'Active: 9790872 kB' 'Inactive: 256660 kB' 'Active(anon): 9400960 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1006840 kB' 'Mapped: 144988 kB' 'Shmem: 8397376 kB' 'KReclaimable: 209500 kB' 'Slab: 845312 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 635812 kB' 'KernelStack: 21744 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10957088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214388 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.708 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.709 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:01.710 nr_hugepages=1024 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:01.710 resv_hugepages=0 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:01.710 surplus_hugepages=0 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:01.710 anon_hugepages=0 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.710 13:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49955432 kB' 'MemAvailable: 50194408 kB' 'Buffers: 1076 kB' 'Cached: 9042912 kB' 'SwapCached: 0 kB' 'Active: 9790568 kB' 'Inactive: 256660 kB' 'Active(anon): 9400656 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1006440 kB' 'Mapped: 144988 kB' 'Shmem: 8397416 kB' 'KReclaimable: 209500 kB' 'Slab: 845312 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 635812 kB' 'KernelStack: 21728 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10957108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214388 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
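[editor's note] The trace above and below is the meminfo lookup loop from setup/common.sh: it reads /proc/meminfo (or a per-node meminfo file when a node is given), strips the "Node N" prefix, and walks field by field until it reaches the requested key. A minimal sketch of that lookup, assuming the standard meminfo layout; this is a simplified stand-in, not the shipped get_meminfo helper, which uses mapfile plus the read loop visible in the trace:

  # Sketch only: print one /proc/meminfo field, optionally for a single NUMA node.
  get_meminfo_sketch() {          # usage: get_meminfo_sketch HugePages_Total [node]
      local get=$1 node=${2:-} mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix every line with "Node N "; drop it, then match "<key>: <value>".
      sed -E 's/^Node [0-9]+ //' "$mem_f" | awk -v k="$get:" '$1 == k {print $2; exit}'
  }
  # Example: on this machine the dump above reports HugePages_Total: 1024, so
  #   get_meminfo_sketch HugePages_Total
  # would print 1024.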
00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.710 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.711 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.712 13:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 22536764 kB' 'MemUsed: 10048604 kB' 'SwapCached: 0 kB' 'Active: 5907504 kB' 'Inactive: 96472 kB' 'Active(anon): 5676672 kB' 'Inactive(anon): 0 kB' 'Active(file): 230832 kB' 'Inactive(file): 96472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5343812 kB' 'Mapped: 52588 kB' 'AnonPages: 663300 kB' 'Shmem: 5016508 kB' 'KernelStack: 12168 kB' 'PageTables: 5700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130108 kB' 'Slab: 467244 kB' 'SReclaimable: 130108 kB' 'SUnreclaim: 337136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
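[editor's note] Just before this point the trace ran get_nodes (hugepages.sh@26-32), recording nodes_sys[0]=1024 and nodes_sys[1]=0 over no_nodes=2, and is now resolving HugePages_Surp from node0's meminfo. A hedged sketch of that node enumeration; the sysfs hugepage path is an assumption on my part, not something this log prints:

  # Sketch only: count NUMA nodes and record each node's allocated 2 MiB hugepages.
  declare -A nodes_sys
  for node in /sys/devices/system/node/node[0-9]*; do
      n=${node##*node}                                   # node0 -> 0, node1 -> 1
      nodes_sys[$n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")   # path assumed
  done
  echo "no_nodes=${#nodes_sys[@]}"                       # the log reports no_nodes=2 here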
00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.712 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
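[editor's note] The accounting that this scan feeds is simple: verify_nr_hugepages checks that the global HugePages_Total equals nr_hugepages plus the surplus and reserved counts, then rolls the per-node figures into the "node0=1024 expecting 1024" line printed at the end of this pass. A compressed sketch of that check, using the values this log reports (1024 pages allocated, 0 reserved, 0 surplus); it is an illustration of the arithmetic, not the verbatim hugepages.sh logic:

  # Sketch only: the bookkeeping behind "node0=1024 expecting 1024".
  nr_hugepages=1024 resv=0 surp=0
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in the dump above
  if (( total == nr_hugepages + surp + resv )); then
      echo "node0=$total expecting $nr_hugepages"
  else
      echo "global hugepage count mismatch" >&2
  fi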
00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:01.713 node0=1024 expecting 1024 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.713 13:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:05.039 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:05.039 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:05.039 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49929604 kB' 'MemAvailable: 50168580 kB' 'Buffers: 1076 kB' 'Cached: 9043000 kB' 'SwapCached: 0 kB' 'Active: 9791884 kB' 'Inactive: 256660 kB' 'Active(anon): 9401972 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1007612 kB' 'Mapped: 145040 kB' 'Shmem: 8397504 kB' 'KReclaimable: 209500 kB' 'Slab: 845668 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 636168 kB' 'KernelStack: 21712 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10957712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214340 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.039 13:11:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.039 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
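[editor's note] Between the two verification passes, hugepages.sh@192 re-runs scripts/setup.sh with CLEAR_HUGE=no, NRHUGE=512 and HUGENODE=0, and setup.sh answers with the "Requested 512 hugepages but 1024 already allocated on node0" line seen above; the AnonHugePages scan continuing below is the follow-up verify_nr_hugepages pass. A hedged sketch of that invocation as a standalone command (the harness reaches setup.sh through its own "setup output" wrapper rather than calling it directly):

  # Sketch only: ask setup.sh for 512 pages on node 0 without clearing the 1024 already there.
  CLEAR_HUGE=no NRHUGE=512 HUGENODE=0 \
      /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
  # On this machine setup.sh keeps the existing allocation and prints:
  #   INFO: Requested 512 hugepages but 1024 already allocated on node0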
00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.040 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49930452 kB' 'MemAvailable: 50169428 kB' 'Buffers: 1076 kB' 'Cached: 9043004 kB' 'SwapCached: 0 kB' 'Active: 9792204 kB' 'Inactive: 256660 kB' 'Active(anon): 9402292 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1008012 kB' 'Mapped: 144964 kB' 'Shmem: 8397508 kB' 'KReclaimable: 209500 kB' 'Slab: 845676 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 636176 kB' 'KernelStack: 21744 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10957728 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 214324 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB' 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.306 13:11:07 
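The lookups in this trace all follow the same pattern: split each 'Key: value' line of the snapshot on ': ', skip keys that do not match the requested name, and echo the matching value. A minimal sketch of that pattern follows; the helper name get_meminfo_sketch and the direct read from /proc/meminfo are illustrative assumptions, not the setup/common.sh source.

  # Sketch only: mirrors the read/match/continue loop shown in the xtrace above (bash).
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # non-matching keys fall through, as in the trace
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1                               # requested key not present
  }
  # Example: on this host the call below would print 0, matching the 'echo 0' seen in the trace.
  get_meminfo_sketch HugePages_Surp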
00:04:05.306 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # (HugePages_Surp lookup walks each key of the snapshot above in /proc/meminfo order, MemTotal through HugePages_Rsvd; each non-matching key hits 'continue')
00:04:05.307 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.307 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.307 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.307 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:04:05.307 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:05.307 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:05.307 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:05.307 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:05.307 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:05.307 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.307 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.307 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.307 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.307 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.308 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49930704 kB' 'MemAvailable: 50169680 kB' 'Buffers: 1076 kB' 'Cached: 9043024 kB' 'SwapCached: 0 kB' 'Active: 9792420 kB' 'Inactive: 256660 kB' 'Active(anon): 9402508 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1008212 kB' 'Mapped: 144964 kB' 'Shmem: 8397528 kB' 'KReclaimable: 209500 kB' 'Slab: 845676 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 636176 kB' 'KernelStack: 21744 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10960380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214308 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB'
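Both get_meminfo calls above first pick the memory source: the trace shows mem_f=/proc/meminfo, a test for /sys/devices/system/node/node/meminfo (the path collapses because no node was passed), mapfile into an array, and a 'Node <n> ' prefix strip so per-node files parse like the global one. A sketch of that selection, assuming extglob is available; the variable names are illustrative, not the script source.

  shopt -s extglob
  node=""                                    # empty: use the global /proc/meminfo, as in this trace
  mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")           # strips 'Node 0 ' and similar; a no-op for /proc/meminfo lines
  printf '%s\n' "${mem[@]}"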
00:04:05.308 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # (HugePages_Rsvd lookup walks each key of the snapshot above in /proc/meminfo order, MemTotal through HugePages_Free; each non-matching key hits 'continue')
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:04:05.309 nr_hugepages=1024
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:05.309 resv_hugepages=0
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:05.309 surplus_hugepages=0
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:05.309 anon_hugepages=0
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.309 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65444744 kB' 'MemFree: 49931580 kB' 'MemAvailable: 50170556 kB' 'Buffers: 1076 kB' 'Cached: 9043024 kB' 'SwapCached: 0 kB' 'Active: 9792488 kB' 'Inactive: 256660 kB' 'Active(anon): 9402576 kB' 'Inactive(anon): 0 kB' 'Active(file): 389912 kB' 'Inactive(file): 256660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1008336 kB' 'Mapped: 144964 kB' 'Shmem: 8397528 kB' 'KReclaimable: 209500 kB' 'Slab: 845676 kB' 'SReclaimable: 209500 kB' 'SUnreclaim: 636176 kB' 'KernelStack: 21712 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40062400 kB' 'Committed_AS: 10960644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214340 kB' 'VmallocChunk: 0 kB' 'Percpu: 73920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 570740 kB' 'DirectMap2M: 11698176 kB' 'DirectMap1G: 56623104 kB'
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- 
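The get_meminfo scan traced above walks /proc/meminfo as "Key: value" pairs and echoes the value once it reaches the requested field (HugePages_Total, returning 1024 here). A minimal stand-alone sketch of that parsing pattern, with illustrative names rather than SPDK's setup/common.sh helper itself:

    # Sketch only: print the value of one "Key: value" field from /proc/meminfo,
    # mirroring the IFS=': ' read loop seen in the trace above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done </proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Total   # e.g. 1024 on this host
    get_meminfo_sketch Hugepagesize      # e.g. 2048 (kB)
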
setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 22498260 kB' 'MemUsed: 10087108 kB' 'SwapCached: 0 kB' 'Active: 5908736 kB' 'Inactive: 96472 kB' 'Active(anon): 5677904 kB' 'Inactive(anon): 0 kB' 'Active(file): 230832 kB' 'Inactive(file): 96472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5343928 kB' 'Mapped: 52596 kB' 'AnonPages: 664428 kB' 'Shmem: 5016624 kB' 'KernelStack: 12152 kB' 'PageTables: 5664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130108 kB' 'Slab: 467392 kB' 'SReclaimable: 130108 kB' 'SUnreclaim: 337284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:05.313 node0=1024 expecting 1024 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.313 00:04:05.313 real 0m7.277s 00:04:05.313 user 0m2.773s 00:04:05.313 sys 0m4.637s 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.313 13:11:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:05.313 ************************************ 00:04:05.313 END TEST no_shrink_alloc 00:04:05.313 ************************************ 00:04:05.313 13:11:07 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:04:05.313 13:11:07 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:05.313 13:11:07 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:05.313 13:11:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.313 13:11:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:05.313 13:11:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.313 13:11:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:05.313 13:11:07 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:05.313 13:11:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.313 13:11:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:05.313 13:11:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.313 13:11:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:05.313 13:11:07 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:05.313 13:11:07 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:05.313 00:04:05.313 real 0m24.474s 00:04:05.313 user 0m8.786s 00:04:05.313 sys 0m14.573s 
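The clear_hp loop traced just above visits every hugepage pool under each NUMA node and echoes 0 back before exporting CLEAR_HUGE=yes, releasing the 1024 pages the test reserved. A rough equivalent of that cleanup, assuming the standard nr_hugepages sysfs file as the write target (the redirect target itself is not visible in the xtrace output):

    # Sketch: drop all reserved hugepages on every NUMA node; needs root.
    for node in /sys/devices/system/node/node[0-9]*; do
        for pool in "$node"/hugepages/hugepages-*; do
            echo 0 | sudo tee "$pool/nr_hugepages" >/dev/null
        done
    done
    export CLEAR_HUGE=yes   # flag the traced test exports before later setup.sh runs
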
00:04:05.313 13:11:07 setup.sh.hugepages -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.313 13:11:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.313 ************************************ 00:04:05.313 END TEST hugepages 00:04:05.313 ************************************ 00:04:05.313 13:11:07 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:05.313 13:11:07 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.313 13:11:07 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.313 13:11:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.573 ************************************ 00:04:05.573 START TEST driver 00:04:05.573 ************************************ 00:04:05.573 13:11:07 setup.sh.driver -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:05.573 * Looking for test storage... 00:04:05.573 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:05.573 13:11:07 setup.sh.driver -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:05.573 13:11:07 setup.sh.driver -- common/autotest_common.sh@1711 -- # lcov --version 00:04:05.573 13:11:07 setup.sh.driver -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:05.573 13:11:07 setup.sh.driver -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.573 13:11:07 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:04:05.573 13:11:07 setup.sh.driver -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.573 13:11:07 setup.sh.driver -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:05.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.573 --rc genhtml_branch_coverage=1 00:04:05.573 --rc genhtml_function_coverage=1 00:04:05.573 --rc genhtml_legend=1 00:04:05.573 --rc geninfo_all_blocks=1 00:04:05.573 --rc geninfo_unexecuted_blocks=1 00:04:05.573 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:05.573 ' 00:04:05.573 13:11:07 setup.sh.driver -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:05.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.573 --rc genhtml_branch_coverage=1 00:04:05.573 --rc genhtml_function_coverage=1 00:04:05.573 --rc genhtml_legend=1 00:04:05.573 --rc geninfo_all_blocks=1 00:04:05.573 --rc geninfo_unexecuted_blocks=1 00:04:05.573 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:05.573 ' 00:04:05.573 13:11:07 setup.sh.driver -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:05.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.573 --rc genhtml_branch_coverage=1 00:04:05.574 --rc genhtml_function_coverage=1 00:04:05.574 --rc genhtml_legend=1 00:04:05.574 --rc geninfo_all_blocks=1 00:04:05.574 --rc geninfo_unexecuted_blocks=1 00:04:05.574 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:05.574 ' 00:04:05.574 13:11:07 setup.sh.driver -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:05.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.574 --rc genhtml_branch_coverage=1 00:04:05.574 --rc genhtml_function_coverage=1 00:04:05.574 --rc genhtml_legend=1 00:04:05.574 --rc geninfo_all_blocks=1 00:04:05.574 --rc geninfo_unexecuted_blocks=1 00:04:05.574 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:05.574 ' 00:04:05.574 13:11:07 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:05.574 13:11:07 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.574 13:11:07 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.862 13:11:12 setup.sh.driver -- 
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:10.862 13:11:12 setup.sh.driver -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.862 13:11:12 setup.sh.driver -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.862 13:11:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:10.862 ************************************ 00:04:10.863 START TEST guess_driver 00:04:10.863 ************************************ 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1129 -- # guess_driver 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:10.863 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:10.863 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:10.863 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:10.863 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:10.863 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:10.863 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:10.863 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:10.863 Looking for driver=vfio-pci 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
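guess_driver settles on vfio-pci here because the host exposes 176 populated IOMMU groups and modprobe --show-depends resolves vfio_pci to loadable .ko modules. Stripped of the test plumbing, the decision amounts to two cheap checks, roughly (simplified sketch, not driver.sh verbatim):

    # Prefer vfio-pci when the IOMMU is active and the module resolves;
    # otherwise report the "No valid driver found" marker the test checks for.
    pick_driver_sketch() {
        shopt -s nullglob
        local groups=(/sys/kernel/iommu_groups/*)
        if ((${#groups[@]} > 0)) &&
           modprobe --show-depends vfio_pci | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }
    pick_driver_sketch   # prints "vfio-pci" on this host
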
# setup output config 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.863 13:11:12 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.162 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.163 13:11:16 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.163 13:11:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.073 13:11:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.073 13:11:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.073 13:11:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.073 13:11:18 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:16.073 13:11:18 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:16.073 13:11:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.073 13:11:18 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.353 00:04:21.353 real 0m10.252s 00:04:21.353 user 0m2.725s 00:04:21.353 sys 0m5.133s 00:04:21.353 13:11:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.353 13:11:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:21.353 ************************************ 00:04:21.353 END TEST guess_driver 00:04:21.353 ************************************ 00:04:21.353 00:04:21.353 real 0m15.485s 00:04:21.353 user 0m4.202s 00:04:21.353 sys 0m8.060s 00:04:21.353 13:11:23 setup.sh.driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.353 13:11:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:21.353 ************************************ 00:04:21.353 END TEST driver 00:04:21.353 ************************************ 00:04:21.353 13:11:23 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:21.353 13:11:23 setup.sh -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.353 13:11:23 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.353 13:11:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.353 ************************************ 00:04:21.353 START TEST devices 00:04:21.353 ************************************ 00:04:21.353 13:11:23 setup.sh.devices -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:21.353 * Looking for test storage... 00:04:21.353 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:21.353 13:11:23 setup.sh.devices -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:21.353 13:11:23 setup.sh.devices -- common/autotest_common.sh@1711 -- # lcov --version 00:04:21.353 13:11:23 setup.sh.devices -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:21.353 13:11:23 setup.sh.devices -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.353 13:11:23 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:04:21.353 13:11:23 setup.sh.devices -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.353 13:11:23 setup.sh.devices -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:21.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.353 --rc genhtml_branch_coverage=1 00:04:21.353 --rc genhtml_function_coverage=1 00:04:21.353 --rc genhtml_legend=1 00:04:21.353 --rc geninfo_all_blocks=1 00:04:21.353 --rc geninfo_unexecuted_blocks=1 00:04:21.353 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:21.353 ' 00:04:21.353 13:11:23 setup.sh.devices -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:21.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.353 --rc genhtml_branch_coverage=1 00:04:21.353 --rc genhtml_function_coverage=1 00:04:21.353 --rc genhtml_legend=1 00:04:21.353 --rc geninfo_all_blocks=1 00:04:21.353 --rc geninfo_unexecuted_blocks=1 00:04:21.353 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:21.353 ' 00:04:21.353 13:11:23 setup.sh.devices -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:21.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.353 --rc genhtml_branch_coverage=1 00:04:21.353 --rc genhtml_function_coverage=1 00:04:21.354 --rc genhtml_legend=1 00:04:21.354 --rc geninfo_all_blocks=1 00:04:21.354 --rc geninfo_unexecuted_blocks=1 00:04:21.354 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:21.354 ' 00:04:21.354 13:11:23 setup.sh.devices -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:21.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.354 --rc genhtml_branch_coverage=1 00:04:21.354 --rc genhtml_function_coverage=1 00:04:21.354 --rc genhtml_legend=1 00:04:21.354 --rc geninfo_all_blocks=1 00:04:21.354 --rc geninfo_unexecuted_blocks=1 00:04:21.354 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:21.354 ' 00:04:21.354 13:11:23 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:21.354 13:11:23 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:21.354 13:11:23 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.354 13:11:23 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:25.553 13:11:27 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:04:25.553 13:11:27 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:25.553 No valid GPT data, bailing 00:04:25.553 13:11:27 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:25.553 13:11:27 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:04:25.553 13:11:27 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:25.553 13:11:27 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:25.553 13:11:27 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:25.553 13:11:27 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 
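The trace above is the devices.sh discovery pass: it walks /sys/class/nvme, skips zoned namespaces, keeps only disks of at least 3221225472 bytes, and records which PCI address (0000:d8:00.0 here) backs each usable block device. What follows is only a rough, self-contained sketch of that logic under assumed names; it is not the SPDK helper itself, and the variable names and final printf are invented for illustration.

# Rough sketch only: enumerate NVMe namespaces, skip zoned ones, keep disks
# of at least 3221225472 bytes, and remember the owning PCI address.
declare -a blocks=()
declare -A blocks_to_pci=()
min_disk_size=3221225472                              # same threshold as the log

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:d8:00.0
    for ns in "$ctrl"/nvme*n*; do
        dev=${ns##*/}
        [[ -b /dev/$dev ]] || continue
        # zoned namespaces are excluded from the mount tests
        if [[ -e /sys/block/$dev/queue/zoned ]] &&
           [[ $(cat "/sys/block/$dev/queue/zoned") != none ]]; then
            continue
        fi
        size=$(( $(cat "/sys/block/$dev/size") * 512 ))   # sectors -> bytes
        (( size >= min_disk_size )) || continue
        blocks+=("$dev")
        blocks_to_pci[$dev]=$bdf
    done
done

(( ${#blocks[@]} > 0 )) && \
    printf 'test disk: %s (pci %s)\n' "${blocks[0]}" "${blocks_to_pci[${blocks[0]}]}"

In the real run the size helper reports 1600321314816 bytes for nvme0n1, which clears the 3221225472-byte minimum, so nvme0n1 becomes the declared test_disk.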
00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:25.553 13:11:27 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.553 13:11:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:25.553 ************************************ 00:04:25.553 START TEST nvme_mount 00:04:25.553 ************************************ 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1129 -- # nvme_mount 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:25.553 13:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:26.123 Creating new GPT entries in memory. 00:04:26.123 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:26.123 other utilities. 00:04:26.123 13:11:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:26.123 13:11:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.123 13:11:28 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:26.123 13:11:28 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.123 13:11:28 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:27.064 Creating new GPT entries in memory. 00:04:27.064 The operation has completed successfully. 00:04:27.064 13:11:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:27.064 13:11:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.064 13:11:29 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 263730 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.323 13:11:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:30.672 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:30.672 13:11:32 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:30.931 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:30.931 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:30.931 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:30.931 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:30.931 13:11:33 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:30.931 13:11:33 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:30.931 13:11:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.931 13:11:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:30.931 13:11:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.191 13:11:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 
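Each of the long [[ 0000:xx:xx.x == \0\0\0\0... ]] runs in this section is the same verify step: re-run setup.sh config with PCI_ALLOWED pinned to the test disk, read the per-device output line by line, and only pass once the status column carries the expected "Active devices: ..." entry for that disk. A minimal sketch of that loop, with invented names (SETUP_SH, verify_active) standing in for the real devices.sh helpers, could look like this:

# Minimal sketch, under assumed names, of the verify loop traced above.
SETUP_SH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh

verify_active() {
    local dev=$1 mounts=$2 found=0 pci status
    # The trace reads each config output line as "<pci> <_> <_> <status>",
    # where status ends up like
    # "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev".
    while read -r pci _ _ status; do
        if [[ $pci == "$dev" ]]; then
            [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
        fi
    done < <(PCI_ALLOWED=$dev "$SETUP_SH" config)
    (( found == 1 ))
}

# e.g.: verify_active 0000:d8:00.0 nvme0n1:nvme0n1p1

In the trace the expected pattern is glob-escaped (\A\c\t\i\v\e ...), which is why those checks look so dense; the plain *...* pattern above is a simplification of the same match.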
00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.487 13:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:37.781 13:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.041 13:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.041 13:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:38.041 13:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:38.041 13:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:38.041 13:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.041 13:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.041 13:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.041 13:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.041 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.041 00:04:38.041 real 0m12.857s 00:04:38.041 user 0m3.698s 00:04:38.041 sys 0m7.064s 00:04:38.041 13:11:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.041 13:11:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:38.041 ************************************ 00:04:38.041 END TEST nvme_mount 00:04:38.041 ************************************ 00:04:38.041 13:11:40 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:38.041 13:11:40 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.041 13:11:40 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.041 13:11:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:38.041 ************************************ 00:04:38.041 START TEST dm_mount 00:04:38.041 ************************************ 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@1129 -- # 
dm_mount 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:38.041 13:11:40 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:38.982 Creating new GPT entries in memory. 00:04:38.982 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:38.982 other utilities. 00:04:38.982 13:11:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:38.982 13:11:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.982 13:11:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:38.982 13:11:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.982 13:11:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:40.363 Creating new GPT entries in memory. 00:04:40.364 The operation has completed successfully. 00:04:40.364 13:11:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.364 13:11:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.364 13:11:42 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.364 13:11:42 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.364 13:11:42 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:41.304 The operation has completed successfully. 
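At this point the dm_mount test has re-partitioned the disk: zap the GPT, then carve two 1 GiB partitions (sectors 2048-2099199 and 2099200-4196351) that are later combined into the nvme_dm_test device-mapper target. Below is a hedged sketch of just that partitioning step, with udevadm settle standing in for the scripts/sync_dev_uevents.sh wait seen in the log.

# Sketch of the partitioning step that just completed; sizes come from the trace.
disk=/dev/nvme0n1
part_size=$(( 1073741824 / 512 ))      # 1 GiB in 512-byte sectors = 2097152

sgdisk "$disk" --zap-all               # wipe the old GPT, as in common.sh@56
start=2048
for part in 1 2; do
    end=$(( start + part_size - 1 ))
    sgdisk "$disk" --new=${part}:${start}:${end}
    start=$(( end + 1 ))
done
udevadm settle                         # wait for the new partition uevents

With 512-byte sectors, 1073741824 / 512 = 2097152 sectors per partition, which is exactly why the first partition ends at sector 2099199 and the second at 4196351 in the sgdisk calls above.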
00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 268174 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.304 13:11:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.598 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:44.599 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.859 13:11:46 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.859 13:11:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:48.155 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:48.155 00:04:48.155 real 0m10.158s 00:04:48.155 user 0m2.500s 00:04:48.155 sys 0m4.752s 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.155 13:11:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:48.155 ************************************ 00:04:48.155 END TEST dm_mount 00:04:48.155 ************************************ 00:04:48.155 13:11:50 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:48.155 13:11:50 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:48.155 13:11:50 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.415 13:11:50 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.415 13:11:50 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:48.415 13:11:50 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.415 13:11:50 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:48.675 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:48.675 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:48.675 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:48.675 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:48.675 13:11:50 setup.sh.devices -- 
setup/devices.sh@12 -- # cleanup_dm 00:04:48.675 13:11:50 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:48.675 13:11:50 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.675 13:11:50 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.675 13:11:50 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.675 13:11:50 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.675 13:11:50 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:48.675 00:04:48.675 real 0m27.561s 00:04:48.675 user 0m7.808s 00:04:48.675 sys 0m14.687s 00:04:48.675 13:11:50 setup.sh.devices -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.675 13:11:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:48.675 ************************************ 00:04:48.675 END TEST devices 00:04:48.675 ************************************ 00:04:48.675 00:04:48.675 real 1m33.405s 00:04:48.675 user 0m29.012s 00:04:48.675 sys 0m53.232s 00:04:48.675 13:11:50 setup.sh -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.675 13:11:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:48.675 ************************************ 00:04:48.675 END TEST setup.sh 00:04:48.675 ************************************ 00:04:48.675 13:11:50 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:51.971 Hugepages 00:04:51.971 node hugesize free / total 00:04:51.971 node0 1048576kB 0 / 0 00:04:51.971 node0 2048kB 1024 / 1024 00:04:51.971 node1 1048576kB 0 / 0 00:04:51.971 node1 2048kB 1024 / 1024 00:04:51.971 00:04:51.971 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:51.971 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:51.971 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:51.971 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:51.971 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:51.971 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:51.971 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:51.971 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:51.971 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:51.971 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:51.971 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:51.971 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:51.971 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:51.971 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:51.971 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:51.971 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:51.971 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:52.231 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:52.231 13:11:54 -- spdk/autotest.sh@117 -- # uname -s 00:04:52.231 13:11:54 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:52.231 13:11:54 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:52.231 13:11:54 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:55.526 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:55.526 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:55.526 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:55.526 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:55.526 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:55.526 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:55.526 0000:00:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:04:55.526 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:55.786 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:55.786 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:55.786 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:55.786 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:55.786 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:55.786 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:55.786 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:55.786 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.696 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.696 13:11:59 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:58.636 13:12:00 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:58.636 13:12:00 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:58.636 13:12:00 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:58.636 13:12:00 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:58.636 13:12:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:58.636 13:12:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:58.636 13:12:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.636 13:12:00 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.636 13:12:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:58.636 13:12:00 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:58.636 13:12:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:58.636 13:12:00 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:01.932 Waiting for block devices as requested 00:05:01.932 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:01.932 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:02.192 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:02.192 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:02.192 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:02.451 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:02.451 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:02.451 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:02.710 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:02.710 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:02.710 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:02.971 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:02.971 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:02.971 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:03.234 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:03.234 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:03.234 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:03.492 13:12:05 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:03.492 13:12:05 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:03.492 13:12:05 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:03.493 13:12:05 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:05:03.493 13:12:05 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:03.493 13:12:05 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:03.493 13:12:05 -- common/autotest_common.sh@1492 -- # basename 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:03.493 13:12:05 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:03.493 13:12:05 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:03.493 13:12:05 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:03.493 13:12:05 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:03.493 13:12:05 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:03.493 13:12:05 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:03.493 13:12:05 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:03.493 13:12:05 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:03.493 13:12:05 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:03.493 13:12:05 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:03.493 13:12:05 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:03.493 13:12:05 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:03.493 13:12:05 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:03.493 13:12:05 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:03.493 13:12:05 -- common/autotest_common.sh@1543 -- # continue 00:05:03.493 13:12:05 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:03.493 13:12:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:03.493 13:12:05 -- common/autotest_common.sh@10 -- # set +x 00:05:03.493 13:12:05 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:03.493 13:12:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.493 13:12:05 -- common/autotest_common.sh@10 -- # set +x 00:05:03.493 13:12:05 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:07.691 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:07.691 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:08.631 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:08.631 13:12:10 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:08.631 13:12:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.631 13:12:10 -- common/autotest_common.sh@10 -- # set +x 00:05:08.890 13:12:10 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:08.890 13:12:10 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:08.890 13:12:10 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:08.890 13:12:10 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:08.890 13:12:10 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:08.890 13:12:10 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:08.890 13:12:10 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:08.890 13:12:10 -- 
common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:08.890 13:12:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:08.890 13:12:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:08.890 13:12:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:08.890 13:12:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:08.890 13:12:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:08.890 13:12:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:08.890 13:12:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:08.890 13:12:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:08.890 13:12:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:08.890 13:12:11 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:08.890 13:12:11 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:08.890 13:12:11 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:08.890 13:12:11 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:08.890 13:12:11 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:05:08.890 13:12:11 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:05:08.890 13:12:11 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=278762 00:05:08.890 13:12:11 -- common/autotest_common.sh@1585 -- # waitforlisten 278762 00:05:08.890 13:12:11 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.890 13:12:11 -- common/autotest_common.sh@835 -- # '[' -z 278762 ']' 00:05:08.890 13:12:11 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.890 13:12:11 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.890 13:12:11 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.890 13:12:11 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.890 13:12:11 -- common/autotest_common.sh@10 -- # set +x 00:05:08.890 [2024-12-09 13:12:11.067533] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
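The enumeration above ("gen_nvme.sh | jq ... traddr" followed by a check of /sys/bus/pci/devices/<bdf>/device) is how the harness builds its NVMe BDF list before the Opal revert step. A minimal stand-alone sketch of the same idea, assuming only that $rootdir points at this SPDK checkout and that jq is installed:

    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # Ask SPDK to describe the attached NVMe controllers and pull out their PCI addresses.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # 0x0a54 is the PCI device id the opal-revert path filters on (see the sysfs check above).
        printf '%s -> device id %s\n' "$bdf" "$(cat /sys/bus/pci/devices/$bdf/device)"
    done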
00:05:08.890 [2024-12-09 13:12:11.067614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278762 ] 00:05:09.150 [2024-12-09 13:12:11.153657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.150 [2024-12-09 13:12:11.196801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.409 13:12:11 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.409 13:12:11 -- common/autotest_common.sh@868 -- # return 0 00:05:09.409 13:12:11 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:09.409 13:12:11 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:09.409 13:12:11 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:12.700 nvme0n1 00:05:12.700 13:12:14 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:12.700 [2024-12-09 13:12:14.594766] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:12.700 request: 00:05:12.700 { 00:05:12.700 "nvme_ctrlr_name": "nvme0", 00:05:12.700 "password": "test", 00:05:12.700 "method": "bdev_nvme_opal_revert", 00:05:12.700 "req_id": 1 00:05:12.700 } 00:05:12.700 Got JSON-RPC error response 00:05:12.700 response: 00:05:12.700 { 00:05:12.700 "code": -32602, 00:05:12.700 "message": "Invalid parameters" 00:05:12.700 } 00:05:12.700 13:12:14 -- common/autotest_common.sh@1591 -- # true 00:05:12.700 13:12:14 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:12.700 13:12:14 -- common/autotest_common.sh@1595 -- # killprocess 278762 00:05:12.700 13:12:14 -- common/autotest_common.sh@954 -- # '[' -z 278762 ']' 00:05:12.700 13:12:14 -- common/autotest_common.sh@958 -- # kill -0 278762 00:05:12.700 13:12:14 -- common/autotest_common.sh@959 -- # uname 00:05:12.700 13:12:14 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.700 13:12:14 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278762 00:05:12.700 13:12:14 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.700 13:12:14 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.700 13:12:14 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278762' 00:05:12.700 killing process with pid 278762 00:05:12.700 13:12:14 -- common/autotest_common.sh@973 -- # kill 278762 00:05:12.700 13:12:14 -- common/autotest_common.sh@978 -- # wait 278762 00:05:15.237 13:12:16 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:15.237 13:12:16 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:15.237 13:12:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:15.237 13:12:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:15.237 13:12:16 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:15.237 13:12:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.237 13:12:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.237 13:12:16 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:15.237 13:12:16 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:15.237 13:12:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.237 13:12:16 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:15.237 13:12:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.237 ************************************ 00:05:15.237 START TEST env 00:05:15.237 ************************************ 00:05:15.237 13:12:16 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:15.237 * Looking for test storage... 00:05:15.237 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:15.237 13:12:17 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.237 13:12:17 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.237 13:12:17 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:15.237 13:12:17 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.237 13:12:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.237 13:12:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.237 13:12:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.237 13:12:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.237 13:12:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.237 13:12:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.237 13:12:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.237 13:12:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.237 13:12:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.237 13:12:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.237 13:12:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.237 13:12:17 env -- scripts/common.sh@344 -- # case "$op" in 00:05:15.237 13:12:17 env -- scripts/common.sh@345 -- # : 1 00:05:15.237 13:12:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.237 13:12:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.237 13:12:17 env -- scripts/common.sh@365 -- # decimal 1 00:05:15.237 13:12:17 env -- scripts/common.sh@353 -- # local d=1 00:05:15.237 13:12:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.237 13:12:17 env -- scripts/common.sh@355 -- # echo 1 00:05:15.237 13:12:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.237 13:12:17 env -- scripts/common.sh@366 -- # decimal 2 00:05:15.237 13:12:17 env -- scripts/common.sh@353 -- # local d=2 00:05:15.237 13:12:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.237 13:12:17 env -- scripts/common.sh@355 -- # echo 2 00:05:15.237 13:12:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.237 13:12:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.237 13:12:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.237 13:12:17 env -- scripts/common.sh@368 -- # return 0 00:05:15.237 13:12:17 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.237 13:12:17 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.237 --rc genhtml_branch_coverage=1 00:05:15.237 --rc genhtml_function_coverage=1 00:05:15.237 --rc genhtml_legend=1 00:05:15.238 --rc geninfo_all_blocks=1 00:05:15.238 --rc geninfo_unexecuted_blocks=1 00:05:15.238 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:15.238 ' 00:05:15.238 13:12:17 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.238 --rc genhtml_branch_coverage=1 00:05:15.238 --rc genhtml_function_coverage=1 00:05:15.238 --rc genhtml_legend=1 00:05:15.238 --rc geninfo_all_blocks=1 00:05:15.238 --rc geninfo_unexecuted_blocks=1 00:05:15.238 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:15.238 ' 00:05:15.238 13:12:17 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.238 --rc genhtml_branch_coverage=1 00:05:15.238 --rc genhtml_function_coverage=1 00:05:15.238 --rc genhtml_legend=1 00:05:15.238 --rc geninfo_all_blocks=1 00:05:15.238 --rc geninfo_unexecuted_blocks=1 00:05:15.238 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:15.238 ' 00:05:15.238 13:12:17 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.238 --rc genhtml_branch_coverage=1 00:05:15.238 --rc genhtml_function_coverage=1 00:05:15.238 --rc genhtml_legend=1 00:05:15.238 --rc geninfo_all_blocks=1 00:05:15.238 --rc geninfo_unexecuted_blocks=1 00:05:15.238 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:15.238 ' 00:05:15.238 13:12:17 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:15.238 13:12:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.238 13:12:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.238 13:12:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.238 ************************************ 00:05:15.238 START TEST env_memory 00:05:15.238 ************************************ 00:05:15.238 13:12:17 env.env_memory -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:15.238 00:05:15.238 00:05:15.238 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.238 http://cunit.sourceforge.net/ 00:05:15.238 00:05:15.238 00:05:15.238 Suite: memory 00:05:15.238 Test: alloc and free memory map ...[2024-12-09 13:12:17.176205] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:15.238 passed 00:05:15.238 Test: mem map translation ...[2024-12-09 13:12:17.189094] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:15.238 [2024-12-09 13:12:17.189109] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:15.238 [2024-12-09 13:12:17.189142] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:15.238 [2024-12-09 13:12:17.189151] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:15.238 passed 00:05:15.238 Test: mem map registration ...[2024-12-09 13:12:17.210511] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:15.238 [2024-12-09 13:12:17.210527] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:15.238 passed 00:05:15.238 Test: mem map adjacent registrations ...passed 00:05:15.238 00:05:15.238 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.238 suites 1 1 n/a 0 0 00:05:15.238 tests 4 4 4 0 0 00:05:15.238 asserts 152 152 152 0 n/a 00:05:15.238 00:05:15.238 Elapsed time = 0.085 seconds 00:05:15.238 00:05:15.238 real 0m0.098s 00:05:15.238 user 0m0.085s 00:05:15.238 sys 0m0.013s 00:05:15.238 13:12:17 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.238 13:12:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:15.238 ************************************ 00:05:15.238 END TEST env_memory 00:05:15.238 ************************************ 00:05:15.238 13:12:17 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:15.238 13:12:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.238 13:12:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.238 13:12:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.238 ************************************ 00:05:15.238 START TEST env_vtophys 00:05:15.238 ************************************ 00:05:15.238 13:12:17 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:15.238 EAL: lib.eal log level changed from notice to debug 00:05:15.238 EAL: Detected lcore 0 as core 0 on socket 0 00:05:15.238 EAL: Detected lcore 1 as core 1 on socket 0 00:05:15.238 EAL: Detected lcore 2 as core 2 on socket 0 00:05:15.238 EAL: Detected lcore 3 as 
core 3 on socket 0 00:05:15.238 EAL: Detected lcore 4 as core 4 on socket 0 00:05:15.238 EAL: Detected lcore 5 as core 5 on socket 0 00:05:15.238 EAL: Detected lcore 6 as core 6 on socket 0 00:05:15.238 EAL: Detected lcore 7 as core 8 on socket 0 00:05:15.238 EAL: Detected lcore 8 as core 9 on socket 0 00:05:15.238 EAL: Detected lcore 9 as core 10 on socket 0 00:05:15.238 EAL: Detected lcore 10 as core 11 on socket 0 00:05:15.238 EAL: Detected lcore 11 as core 12 on socket 0 00:05:15.238 EAL: Detected lcore 12 as core 13 on socket 0 00:05:15.238 EAL: Detected lcore 13 as core 14 on socket 0 00:05:15.238 EAL: Detected lcore 14 as core 16 on socket 0 00:05:15.238 EAL: Detected lcore 15 as core 17 on socket 0 00:05:15.238 EAL: Detected lcore 16 as core 18 on socket 0 00:05:15.238 EAL: Detected lcore 17 as core 19 on socket 0 00:05:15.238 EAL: Detected lcore 18 as core 20 on socket 0 00:05:15.238 EAL: Detected lcore 19 as core 21 on socket 0 00:05:15.238 EAL: Detected lcore 20 as core 22 on socket 0 00:05:15.238 EAL: Detected lcore 21 as core 24 on socket 0 00:05:15.238 EAL: Detected lcore 22 as core 25 on socket 0 00:05:15.238 EAL: Detected lcore 23 as core 26 on socket 0 00:05:15.238 EAL: Detected lcore 24 as core 27 on socket 0 00:05:15.238 EAL: Detected lcore 25 as core 28 on socket 0 00:05:15.238 EAL: Detected lcore 26 as core 29 on socket 0 00:05:15.238 EAL: Detected lcore 27 as core 30 on socket 0 00:05:15.238 EAL: Detected lcore 28 as core 0 on socket 1 00:05:15.238 EAL: Detected lcore 29 as core 1 on socket 1 00:05:15.238 EAL: Detected lcore 30 as core 2 on socket 1 00:05:15.238 EAL: Detected lcore 31 as core 3 on socket 1 00:05:15.238 EAL: Detected lcore 32 as core 4 on socket 1 00:05:15.238 EAL: Detected lcore 33 as core 5 on socket 1 00:05:15.238 EAL: Detected lcore 34 as core 6 on socket 1 00:05:15.238 EAL: Detected lcore 35 as core 8 on socket 1 00:05:15.238 EAL: Detected lcore 36 as core 9 on socket 1 00:05:15.238 EAL: Detected lcore 37 as core 10 on socket 1 00:05:15.238 EAL: Detected lcore 38 as core 11 on socket 1 00:05:15.238 EAL: Detected lcore 39 as core 12 on socket 1 00:05:15.238 EAL: Detected lcore 40 as core 13 on socket 1 00:05:15.238 EAL: Detected lcore 41 as core 14 on socket 1 00:05:15.238 EAL: Detected lcore 42 as core 16 on socket 1 00:05:15.238 EAL: Detected lcore 43 as core 17 on socket 1 00:05:15.238 EAL: Detected lcore 44 as core 18 on socket 1 00:05:15.238 EAL: Detected lcore 45 as core 19 on socket 1 00:05:15.238 EAL: Detected lcore 46 as core 20 on socket 1 00:05:15.238 EAL: Detected lcore 47 as core 21 on socket 1 00:05:15.238 EAL: Detected lcore 48 as core 22 on socket 1 00:05:15.238 EAL: Detected lcore 49 as core 24 on socket 1 00:05:15.238 EAL: Detected lcore 50 as core 25 on socket 1 00:05:15.238 EAL: Detected lcore 51 as core 26 on socket 1 00:05:15.238 EAL: Detected lcore 52 as core 27 on socket 1 00:05:15.238 EAL: Detected lcore 53 as core 28 on socket 1 00:05:15.238 EAL: Detected lcore 54 as core 29 on socket 1 00:05:15.238 EAL: Detected lcore 55 as core 30 on socket 1 00:05:15.238 EAL: Detected lcore 56 as core 0 on socket 0 00:05:15.238 EAL: Detected lcore 57 as core 1 on socket 0 00:05:15.238 EAL: Detected lcore 58 as core 2 on socket 0 00:05:15.238 EAL: Detected lcore 59 as core 3 on socket 0 00:05:15.238 EAL: Detected lcore 60 as core 4 on socket 0 00:05:15.238 EAL: Detected lcore 61 as core 5 on socket 0 00:05:15.238 EAL: Detected lcore 62 as core 6 on socket 0 00:05:15.238 EAL: Detected lcore 63 as core 8 on socket 0 00:05:15.238 EAL: 
Detected lcore 64 as core 9 on socket 0 00:05:15.238 EAL: Detected lcore 65 as core 10 on socket 0 00:05:15.238 EAL: Detected lcore 66 as core 11 on socket 0 00:05:15.238 EAL: Detected lcore 67 as core 12 on socket 0 00:05:15.238 EAL: Detected lcore 68 as core 13 on socket 0 00:05:15.238 EAL: Detected lcore 69 as core 14 on socket 0 00:05:15.238 EAL: Detected lcore 70 as core 16 on socket 0 00:05:15.238 EAL: Detected lcore 71 as core 17 on socket 0 00:05:15.238 EAL: Detected lcore 72 as core 18 on socket 0 00:05:15.238 EAL: Detected lcore 73 as core 19 on socket 0 00:05:15.238 EAL: Detected lcore 74 as core 20 on socket 0 00:05:15.238 EAL: Detected lcore 75 as core 21 on socket 0 00:05:15.238 EAL: Detected lcore 76 as core 22 on socket 0 00:05:15.238 EAL: Detected lcore 77 as core 24 on socket 0 00:05:15.238 EAL: Detected lcore 78 as core 25 on socket 0 00:05:15.238 EAL: Detected lcore 79 as core 26 on socket 0 00:05:15.238 EAL: Detected lcore 80 as core 27 on socket 0 00:05:15.238 EAL: Detected lcore 81 as core 28 on socket 0 00:05:15.238 EAL: Detected lcore 82 as core 29 on socket 0 00:05:15.238 EAL: Detected lcore 83 as core 30 on socket 0 00:05:15.238 EAL: Detected lcore 84 as core 0 on socket 1 00:05:15.238 EAL: Detected lcore 85 as core 1 on socket 1 00:05:15.238 EAL: Detected lcore 86 as core 2 on socket 1 00:05:15.238 EAL: Detected lcore 87 as core 3 on socket 1 00:05:15.238 EAL: Detected lcore 88 as core 4 on socket 1 00:05:15.238 EAL: Detected lcore 89 as core 5 on socket 1 00:05:15.238 EAL: Detected lcore 90 as core 6 on socket 1 00:05:15.238 EAL: Detected lcore 91 as core 8 on socket 1 00:05:15.238 EAL: Detected lcore 92 as core 9 on socket 1 00:05:15.238 EAL: Detected lcore 93 as core 10 on socket 1 00:05:15.238 EAL: Detected lcore 94 as core 11 on socket 1 00:05:15.238 EAL: Detected lcore 95 as core 12 on socket 1 00:05:15.238 EAL: Detected lcore 96 as core 13 on socket 1 00:05:15.238 EAL: Detected lcore 97 as core 14 on socket 1 00:05:15.238 EAL: Detected lcore 98 as core 16 on socket 1 00:05:15.238 EAL: Detected lcore 99 as core 17 on socket 1 00:05:15.238 EAL: Detected lcore 100 as core 18 on socket 1 00:05:15.238 EAL: Detected lcore 101 as core 19 on socket 1 00:05:15.238 EAL: Detected lcore 102 as core 20 on socket 1 00:05:15.238 EAL: Detected lcore 103 as core 21 on socket 1 00:05:15.238 EAL: Detected lcore 104 as core 22 on socket 1 00:05:15.239 EAL: Detected lcore 105 as core 24 on socket 1 00:05:15.239 EAL: Detected lcore 106 as core 25 on socket 1 00:05:15.239 EAL: Detected lcore 107 as core 26 on socket 1 00:05:15.239 EAL: Detected lcore 108 as core 27 on socket 1 00:05:15.239 EAL: Detected lcore 109 as core 28 on socket 1 00:05:15.239 EAL: Detected lcore 110 as core 29 on socket 1 00:05:15.239 EAL: Detected lcore 111 as core 30 on socket 1 00:05:15.239 EAL: Maximum logical cores by configuration: 128 00:05:15.239 EAL: Detected CPU lcores: 112 00:05:15.239 EAL: Detected NUMA nodes: 2 00:05:15.239 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:15.239 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:15.239 EAL: Checking presence of .so 'librte_eal.so' 00:05:15.239 EAL: Detected static linkage of DPDK 00:05:15.239 EAL: No shared files mode enabled, IPC will be disabled 00:05:15.239 EAL: Bus pci wants IOVA as 'DC' 00:05:15.239 EAL: Buses did not request a specific IOVA mode. 00:05:15.239 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:15.239 EAL: Selected IOVA mode 'VA' 00:05:15.239 EAL: Probing VFIO support... 
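The EAL banner above records the topology and I/O-virtualization decisions that matter for the rest of the run: 112 lcores across 2 NUMA nodes, static DPDK linkage, and an available IOMMU, which is why IOVA mode 'VA' is selected and vfio-pci can be used. A hedged sketch of how to confirm the same facts on the host with only standard Linux tooling:

    # CPU and NUMA counts, as EAL detects them.
    lscpu | grep -E '^(CPU\(s\)|NUMA node\(s\))'
    # A non-empty set of IOMMU groups is what makes vfio-pci (and IOVA=VA) usable.
    ls /sys/kernel/iommu_groups/ | wc -l
    # Is the vfio-pci module loaded? (It is what the ioatdma/nvme devices were rebound to earlier.)
    lsmod | grep -w vfio_pci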
00:05:15.239 EAL: IOMMU type 1 (Type 1) is supported 00:05:15.239 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:15.239 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:15.239 EAL: VFIO support initialized 00:05:15.239 EAL: Ask a virtual area of 0x2e000 bytes 00:05:15.239 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:15.239 EAL: Setting up physically contiguous memory... 00:05:15.239 EAL: Setting maximum number of open files to 524288 00:05:15.239 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:15.239 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:15.239 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:15.239 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.239 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:15.239 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:15.239 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.239 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:15.239 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:15.239 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.239 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:15.239 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:15.239 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.239 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:15.239 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:15.239 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.239 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:15.239 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:15.239 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.239 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:15.239 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:15.239 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.239 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:15.239 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:15.239 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.239 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:15.239 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:15.239 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:15.239 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.239 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:15.239 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:15.239 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.239 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:15.239 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:15.239 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.239 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:15.239 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:15.239 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.239 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:15.239 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:15.239 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.239 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:15.239 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:15.239 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.239 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:15.239 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:15.239 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.239 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:15.239 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:15.239 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.239 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:15.239 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:15.239 EAL: Hugepages will be freed exactly as allocated. 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: TSC frequency is ~2500000 KHz 00:05:15.239 EAL: Main lcore 0 is ready (tid=7fa6351b4a00;cpuset=[0]) 00:05:15.239 EAL: Trying to obtain current memory policy. 00:05:15.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.239 EAL: Restoring previous memory policy: 0 00:05:15.239 EAL: request: mp_malloc_sync 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: Heap on socket 0 was expanded by 2MB 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: Mem event callback 'spdk:(nil)' registered 00:05:15.239 00:05:15.239 00:05:15.239 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.239 http://cunit.sourceforge.net/ 00:05:15.239 00:05:15.239 00:05:15.239 Suite: components_suite 00:05:15.239 Test: vtophys_malloc_test ...passed 00:05:15.239 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:15.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.239 EAL: Restoring previous memory policy: 4 00:05:15.239 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.239 EAL: request: mp_malloc_sync 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: Heap on socket 0 was expanded by 4MB 00:05:15.239 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.239 EAL: request: mp_malloc_sync 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: Heap on socket 0 was shrunk by 4MB 00:05:15.239 EAL: Trying to obtain current memory policy. 00:05:15.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.239 EAL: Restoring previous memory policy: 4 00:05:15.239 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.239 EAL: request: mp_malloc_sync 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: Heap on socket 0 was expanded by 6MB 00:05:15.239 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.239 EAL: request: mp_malloc_sync 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: Heap on socket 0 was shrunk by 6MB 00:05:15.239 EAL: Trying to obtain current memory policy. 00:05:15.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.239 EAL: Restoring previous memory policy: 4 00:05:15.239 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.239 EAL: request: mp_malloc_sync 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: Heap on socket 0 was expanded by 10MB 00:05:15.239 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.239 EAL: request: mp_malloc_sync 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: Heap on socket 0 was shrunk by 10MB 00:05:15.239 EAL: Trying to obtain current memory policy. 
00:05:15.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.239 EAL: Restoring previous memory policy: 4 00:05:15.239 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.239 EAL: request: mp_malloc_sync 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: Heap on socket 0 was expanded by 18MB 00:05:15.239 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.239 EAL: request: mp_malloc_sync 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: Heap on socket 0 was shrunk by 18MB 00:05:15.239 EAL: Trying to obtain current memory policy. 00:05:15.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.239 EAL: Restoring previous memory policy: 4 00:05:15.239 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.239 EAL: request: mp_malloc_sync 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: Heap on socket 0 was expanded by 34MB 00:05:15.239 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.239 EAL: request: mp_malloc_sync 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: Heap on socket 0 was shrunk by 34MB 00:05:15.239 EAL: Trying to obtain current memory policy. 00:05:15.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.239 EAL: Restoring previous memory policy: 4 00:05:15.239 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.239 EAL: request: mp_malloc_sync 00:05:15.239 EAL: No shared files mode enabled, IPC is disabled 00:05:15.239 EAL: Heap on socket 0 was expanded by 66MB 00:05:15.239 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.499 EAL: request: mp_malloc_sync 00:05:15.499 EAL: No shared files mode enabled, IPC is disabled 00:05:15.499 EAL: Heap on socket 0 was shrunk by 66MB 00:05:15.499 EAL: Trying to obtain current memory policy. 00:05:15.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.499 EAL: Restoring previous memory policy: 4 00:05:15.499 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.499 EAL: request: mp_malloc_sync 00:05:15.499 EAL: No shared files mode enabled, IPC is disabled 00:05:15.499 EAL: Heap on socket 0 was expanded by 130MB 00:05:15.499 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.499 EAL: request: mp_malloc_sync 00:05:15.499 EAL: No shared files mode enabled, IPC is disabled 00:05:15.499 EAL: Heap on socket 0 was shrunk by 130MB 00:05:15.499 EAL: Trying to obtain current memory policy. 00:05:15.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.499 EAL: Restoring previous memory policy: 4 00:05:15.499 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.499 EAL: request: mp_malloc_sync 00:05:15.499 EAL: No shared files mode enabled, IPC is disabled 00:05:15.499 EAL: Heap on socket 0 was expanded by 258MB 00:05:15.499 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.499 EAL: request: mp_malloc_sync 00:05:15.499 EAL: No shared files mode enabled, IPC is disabled 00:05:15.499 EAL: Heap on socket 0 was shrunk by 258MB 00:05:15.499 EAL: Trying to obtain current memory policy. 
00:05:15.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.758 EAL: Restoring previous memory policy: 4 00:05:15.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.758 EAL: request: mp_malloc_sync 00:05:15.758 EAL: No shared files mode enabled, IPC is disabled 00:05:15.758 EAL: Heap on socket 0 was expanded by 514MB 00:05:15.759 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.759 EAL: request: mp_malloc_sync 00:05:15.759 EAL: No shared files mode enabled, IPC is disabled 00:05:15.759 EAL: Heap on socket 0 was shrunk by 514MB 00:05:15.759 EAL: Trying to obtain current memory policy. 00:05:15.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.018 EAL: Restoring previous memory policy: 4 00:05:16.018 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.018 EAL: request: mp_malloc_sync 00:05:16.018 EAL: No shared files mode enabled, IPC is disabled 00:05:16.018 EAL: Heap on socket 0 was expanded by 1026MB 00:05:16.277 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.277 EAL: request: mp_malloc_sync 00:05:16.277 EAL: No shared files mode enabled, IPC is disabled 00:05:16.277 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:16.277 passed 00:05:16.277 00:05:16.277 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.277 suites 1 1 n/a 0 0 00:05:16.277 tests 2 2 2 0 0 00:05:16.277 asserts 497 497 497 0 n/a 00:05:16.277 00:05:16.277 Elapsed time = 0.979 seconds 00:05:16.277 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.277 EAL: request: mp_malloc_sync 00:05:16.277 EAL: No shared files mode enabled, IPC is disabled 00:05:16.277 EAL: Heap on socket 0 was shrunk by 2MB 00:05:16.277 EAL: No shared files mode enabled, IPC is disabled 00:05:16.277 EAL: No shared files mode enabled, IPC is disabled 00:05:16.277 EAL: No shared files mode enabled, IPC is disabled 00:05:16.277 00:05:16.277 real 0m1.119s 00:05:16.277 user 0m0.648s 00:05:16.277 sys 0m0.446s 00:05:16.277 13:12:18 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.277 13:12:18 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:16.277 ************************************ 00:05:16.277 END TEST env_vtophys 00:05:16.277 ************************************ 00:05:16.277 13:12:18 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:16.277 13:12:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.277 13:12:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.277 13:12:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.536 ************************************ 00:05:16.536 START TEST env_pci 00:05:16.536 ************************************ 00:05:16.536 13:12:18 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:16.536 00:05:16.536 00:05:16.536 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.536 http://cunit.sourceforge.net/ 00:05:16.536 00:05:16.536 00:05:16.536 Suite: pci 00:05:16.537 Test: pci_hook ...[2024-12-09 13:12:18.542250] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 280071 has claimed it 00:05:16.537 EAL: Cannot find device (10000:00:01.0) 00:05:16.537 EAL: Failed to attach device on primary process 00:05:16.537 passed 00:05:16.537 00:05:16.537 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:16.537 suites 1 1 n/a 0 0 00:05:16.537 tests 1 1 1 0 0 00:05:16.537 asserts 25 25 25 0 n/a 00:05:16.537 00:05:16.537 Elapsed time = 0.035 seconds 00:05:16.537 00:05:16.537 real 0m0.055s 00:05:16.537 user 0m0.015s 00:05:16.537 sys 0m0.040s 00:05:16.537 13:12:18 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.537 13:12:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:16.537 ************************************ 00:05:16.537 END TEST env_pci 00:05:16.537 ************************************ 00:05:16.537 13:12:18 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:16.537 13:12:18 env -- env/env.sh@15 -- # uname 00:05:16.537 13:12:18 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:16.537 13:12:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:16.537 13:12:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:16.537 13:12:18 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:16.537 13:12:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.537 13:12:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.537 ************************************ 00:05:16.537 START TEST env_dpdk_post_init 00:05:16.537 ************************************ 00:05:16.537 13:12:18 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:16.537 EAL: Detected CPU lcores: 112 00:05:16.537 EAL: Detected NUMA nodes: 2 00:05:16.537 EAL: Detected static linkage of DPDK 00:05:16.537 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:16.537 EAL: Selected IOVA mode 'VA' 00:05:16.537 EAL: VFIO support initialized 00:05:16.537 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:16.797 EAL: Using IOMMU type 1 (Type 1) 00:05:17.366 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:21.559 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:21.559 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:05:21.559 Starting DPDK initialization... 00:05:21.559 Starting SPDK post initialization... 00:05:21.559 SPDK NVMe probe 00:05:21.559 Attaching to 0000:d8:00.0 00:05:21.559 Attached to 0000:d8:00.0 00:05:21.559 Cleaning up... 
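The env_dpdk_post_init pass above (probe, attach to 0000:d8:00.0, clean up) only works because the device had been rebound to vfio-pci beforehand. That rebinding is done by the same helper script that produced the "-> vfio-pci" / "-> nvme" lines earlier in this log; a sketch of the sequence, assuming the workspace path used throughout this run:

    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$rootdir/scripts/setup.sh" status   # print the Hugepages/driver table shown earlier
    "$rootdir/scripts/setup.sh"          # bind NVMe and I/OAT devices to vfio-pci for userspace tests
    "$rootdir/scripts/setup.sh" reset    # hand the devices back to their kernel drivers (nvme, ioatdma)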
00:05:21.559 00:05:21.559 real 0m4.719s 00:05:21.559 user 0m3.303s 00:05:21.559 sys 0m0.660s 00:05:21.560 13:12:23 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.560 13:12:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:21.560 ************************************ 00:05:21.560 END TEST env_dpdk_post_init 00:05:21.560 ************************************ 00:05:21.560 13:12:23 env -- env/env.sh@26 -- # uname 00:05:21.560 13:12:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:21.560 13:12:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:21.560 13:12:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.560 13:12:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.560 13:12:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.560 ************************************ 00:05:21.560 START TEST env_mem_callbacks 00:05:21.560 ************************************ 00:05:21.560 13:12:23 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:21.560 EAL: Detected CPU lcores: 112 00:05:21.560 EAL: Detected NUMA nodes: 2 00:05:21.560 EAL: Detected static linkage of DPDK 00:05:21.560 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:21.560 EAL: Selected IOVA mode 'VA' 00:05:21.560 EAL: VFIO support initialized 00:05:21.560 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:21.560 00:05:21.560 00:05:21.560 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.560 http://cunit.sourceforge.net/ 00:05:21.560 00:05:21.560 00:05:21.560 Suite: memory 00:05:21.560 Test: test ... 
00:05:21.560 register 0x200000200000 2097152 00:05:21.560 malloc 3145728 00:05:21.560 register 0x200000400000 4194304 00:05:21.560 buf 0x200000500000 len 3145728 PASSED 00:05:21.560 malloc 64 00:05:21.560 buf 0x2000004fff40 len 64 PASSED 00:05:21.560 malloc 4194304 00:05:21.560 register 0x200000800000 6291456 00:05:21.560 buf 0x200000a00000 len 4194304 PASSED 00:05:21.560 free 0x200000500000 3145728 00:05:21.560 free 0x2000004fff40 64 00:05:21.560 unregister 0x200000400000 4194304 PASSED 00:05:21.560 free 0x200000a00000 4194304 00:05:21.560 unregister 0x200000800000 6291456 PASSED 00:05:21.560 malloc 8388608 00:05:21.560 register 0x200000400000 10485760 00:05:21.560 buf 0x200000600000 len 8388608 PASSED 00:05:21.560 free 0x200000600000 8388608 00:05:21.560 unregister 0x200000400000 10485760 PASSED 00:05:21.560 passed 00:05:21.560 00:05:21.560 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.560 suites 1 1 n/a 0 0 00:05:21.560 tests 1 1 1 0 0 00:05:21.560 asserts 15 15 15 0 n/a 00:05:21.560 00:05:21.560 Elapsed time = 0.008 seconds 00:05:21.560 00:05:21.560 real 0m0.071s 00:05:21.560 user 0m0.019s 00:05:21.560 sys 0m0.052s 00:05:21.560 13:12:23 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.560 13:12:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:21.560 ************************************ 00:05:21.560 END TEST env_mem_callbacks 00:05:21.560 ************************************ 00:05:21.560 00:05:21.560 real 0m6.681s 00:05:21.560 user 0m4.343s 00:05:21.560 sys 0m1.603s 00:05:21.560 13:12:23 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.560 13:12:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.560 ************************************ 00:05:21.560 END TEST env 00:05:21.560 ************************************ 00:05:21.560 13:12:23 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:21.560 13:12:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.560 13:12:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.560 13:12:23 -- common/autotest_common.sh@10 -- # set +x 00:05:21.560 ************************************ 00:05:21.560 START TEST rpc 00:05:21.560 ************************************ 00:05:21.560 13:12:23 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:21.560 * Looking for test storage... 
00:05:21.560 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:21.560 13:12:23 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:21.560 13:12:23 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:21.560 13:12:23 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:21.820 13:12:23 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:21.820 13:12:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.820 13:12:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.820 13:12:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.820 13:12:23 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.820 13:12:23 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.820 13:12:23 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.820 13:12:23 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.820 13:12:23 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.820 13:12:23 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.820 13:12:23 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.820 13:12:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.820 13:12:23 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:21.820 13:12:23 rpc -- scripts/common.sh@345 -- # : 1 00:05:21.820 13:12:23 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.820 13:12:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.820 13:12:23 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:21.820 13:12:23 rpc -- scripts/common.sh@353 -- # local d=1 00:05:21.820 13:12:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.820 13:12:23 rpc -- scripts/common.sh@355 -- # echo 1 00:05:21.820 13:12:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.820 13:12:23 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:21.820 13:12:23 rpc -- scripts/common.sh@353 -- # local d=2 00:05:21.820 13:12:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.820 13:12:23 rpc -- scripts/common.sh@355 -- # echo 2 00:05:21.820 13:12:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.820 13:12:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.820 13:12:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.820 13:12:23 rpc -- scripts/common.sh@368 -- # return 0 00:05:21.820 13:12:23 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.820 13:12:23 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:21.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.820 --rc genhtml_branch_coverage=1 00:05:21.820 --rc genhtml_function_coverage=1 00:05:21.820 --rc genhtml_legend=1 00:05:21.820 --rc geninfo_all_blocks=1 00:05:21.820 --rc geninfo_unexecuted_blocks=1 00:05:21.820 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.820 ' 00:05:21.820 13:12:23 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:21.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.820 --rc genhtml_branch_coverage=1 00:05:21.820 --rc genhtml_function_coverage=1 00:05:21.820 --rc genhtml_legend=1 00:05:21.820 --rc geninfo_all_blocks=1 00:05:21.820 --rc geninfo_unexecuted_blocks=1 00:05:21.820 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.820 ' 00:05:21.820 13:12:23 rpc -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:05:21.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.820 --rc genhtml_branch_coverage=1 00:05:21.820 --rc genhtml_function_coverage=1 00:05:21.820 --rc genhtml_legend=1 00:05:21.820 --rc geninfo_all_blocks=1 00:05:21.820 --rc geninfo_unexecuted_blocks=1 00:05:21.820 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.820 ' 00:05:21.820 13:12:23 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:21.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.820 --rc genhtml_branch_coverage=1 00:05:21.820 --rc genhtml_function_coverage=1 00:05:21.820 --rc genhtml_legend=1 00:05:21.820 --rc geninfo_all_blocks=1 00:05:21.820 --rc geninfo_unexecuted_blocks=1 00:05:21.820 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.820 ' 00:05:21.820 13:12:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=281231 00:05:21.820 13:12:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.820 13:12:23 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:21.820 13:12:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 281231 00:05:21.820 13:12:23 rpc -- common/autotest_common.sh@835 -- # '[' -z 281231 ']' 00:05:21.820 13:12:23 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.820 13:12:23 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.820 13:12:23 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.820 13:12:23 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.820 13:12:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.820 [2024-12-09 13:12:23.901377] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:05:21.820 [2024-12-09 13:12:23.901443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid281231 ] 00:05:21.820 [2024-12-09 13:12:23.983350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.820 [2024-12-09 13:12:24.021779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:21.820 [2024-12-09 13:12:24.021818] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 281231' to capture a snapshot of events at runtime. 00:05:21.820 [2024-12-09 13:12:24.021827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:21.820 [2024-12-09 13:12:24.021835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:21.820 [2024-12-09 13:12:24.021842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid281231 for offline analysis/debug. 
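The spdk_tgt startup notice above spells out how to inspect the bdev tracepoints this rpc test enables with '-e bdev'. A hedged sketch of both options it mentions (the binary path is assumed to sit next to spdk_tgt in build/bin, and the -f flag for reading a saved trace file is an assumption about the spdk_trace tool, not something shown in this log):

    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # Live snapshot of events while pid 281231 is still running, exactly as the notice suggests.
    "$rootdir/build/bin/spdk_trace" -s spdk_tgt -p 281231
    # Offline analysis of the per-pid trace file the target leaves in /dev/shm.
    "$rootdir/build/bin/spdk_trace" -f /dev/shm/spdk_tgt_trace.pid281231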
00:05:21.820 [2024-12-09 13:12:24.022416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.079 13:12:24 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.079 13:12:24 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:22.079 13:12:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:22.079 13:12:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:22.079 13:12:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:22.079 13:12:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:22.079 13:12:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.079 13:12:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.080 13:12:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.080 ************************************ 00:05:22.080 START TEST rpc_integrity 00:05:22.080 ************************************ 00:05:22.080 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:22.080 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:22.080 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.080 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.080 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.080 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:22.080 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:22.339 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:22.339 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:22.339 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.339 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.339 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.339 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:22.339 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:22.339 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.339 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.339 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.339 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:22.339 { 00:05:22.339 "name": "Malloc0", 00:05:22.339 "aliases": [ 00:05:22.339 "a9499b04-2629-4463-895d-44c53cf8b2b8" 00:05:22.339 ], 00:05:22.339 "product_name": "Malloc disk", 00:05:22.339 "block_size": 512, 00:05:22.339 "num_blocks": 16384, 00:05:22.339 "uuid": "a9499b04-2629-4463-895d-44c53cf8b2b8", 00:05:22.339 "assigned_rate_limits": { 00:05:22.339 "rw_ios_per_sec": 0, 00:05:22.339 "rw_mbytes_per_sec": 0, 00:05:22.339 "r_mbytes_per_sec": 0, 00:05:22.339 "w_mbytes_per_sec": 
0 00:05:22.339 }, 00:05:22.339 "claimed": false, 00:05:22.339 "zoned": false, 00:05:22.339 "supported_io_types": { 00:05:22.339 "read": true, 00:05:22.339 "write": true, 00:05:22.339 "unmap": true, 00:05:22.339 "flush": true, 00:05:22.339 "reset": true, 00:05:22.339 "nvme_admin": false, 00:05:22.339 "nvme_io": false, 00:05:22.339 "nvme_io_md": false, 00:05:22.339 "write_zeroes": true, 00:05:22.339 "zcopy": true, 00:05:22.339 "get_zone_info": false, 00:05:22.339 "zone_management": false, 00:05:22.339 "zone_append": false, 00:05:22.339 "compare": false, 00:05:22.339 "compare_and_write": false, 00:05:22.339 "abort": true, 00:05:22.339 "seek_hole": false, 00:05:22.339 "seek_data": false, 00:05:22.339 "copy": true, 00:05:22.339 "nvme_iov_md": false 00:05:22.339 }, 00:05:22.339 "memory_domains": [ 00:05:22.339 { 00:05:22.339 "dma_device_id": "system", 00:05:22.339 "dma_device_type": 1 00:05:22.339 }, 00:05:22.339 { 00:05:22.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.339 "dma_device_type": 2 00:05:22.339 } 00:05:22.339 ], 00:05:22.339 "driver_specific": {} 00:05:22.339 } 00:05:22.339 ]' 00:05:22.339 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:22.339 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:22.339 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:22.339 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.339 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.339 [2024-12-09 13:12:24.426226] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:22.339 [2024-12-09 13:12:24.426260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:22.339 [2024-12-09 13:12:24.426275] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5427d00 00:05:22.339 [2024-12-09 13:12:24.426301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:22.339 [2024-12-09 13:12:24.427241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:22.339 [2024-12-09 13:12:24.427265] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:22.339 Passthru0 00:05:22.339 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.339 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:22.339 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.339 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.339 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.339 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:22.339 { 00:05:22.339 "name": "Malloc0", 00:05:22.339 "aliases": [ 00:05:22.339 "a9499b04-2629-4463-895d-44c53cf8b2b8" 00:05:22.339 ], 00:05:22.339 "product_name": "Malloc disk", 00:05:22.339 "block_size": 512, 00:05:22.339 "num_blocks": 16384, 00:05:22.339 "uuid": "a9499b04-2629-4463-895d-44c53cf8b2b8", 00:05:22.339 "assigned_rate_limits": { 00:05:22.339 "rw_ios_per_sec": 0, 00:05:22.339 "rw_mbytes_per_sec": 0, 00:05:22.339 "r_mbytes_per_sec": 0, 00:05:22.339 "w_mbytes_per_sec": 0 00:05:22.339 }, 00:05:22.339 "claimed": true, 00:05:22.339 "claim_type": "exclusive_write", 00:05:22.339 "zoned": false, 00:05:22.339 "supported_io_types": { 00:05:22.339 "read": true, 00:05:22.339 "write": true, 00:05:22.339 "unmap": true, 
00:05:22.339 "flush": true, 00:05:22.339 "reset": true, 00:05:22.339 "nvme_admin": false, 00:05:22.339 "nvme_io": false, 00:05:22.339 "nvme_io_md": false, 00:05:22.339 "write_zeroes": true, 00:05:22.339 "zcopy": true, 00:05:22.339 "get_zone_info": false, 00:05:22.339 "zone_management": false, 00:05:22.339 "zone_append": false, 00:05:22.339 "compare": false, 00:05:22.339 "compare_and_write": false, 00:05:22.339 "abort": true, 00:05:22.339 "seek_hole": false, 00:05:22.339 "seek_data": false, 00:05:22.339 "copy": true, 00:05:22.339 "nvme_iov_md": false 00:05:22.339 }, 00:05:22.339 "memory_domains": [ 00:05:22.339 { 00:05:22.339 "dma_device_id": "system", 00:05:22.339 "dma_device_type": 1 00:05:22.339 }, 00:05:22.339 { 00:05:22.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.340 "dma_device_type": 2 00:05:22.340 } 00:05:22.340 ], 00:05:22.340 "driver_specific": {} 00:05:22.340 }, 00:05:22.340 { 00:05:22.340 "name": "Passthru0", 00:05:22.340 "aliases": [ 00:05:22.340 "578e30b8-8f6d-592a-8544-31b926f8c4a3" 00:05:22.340 ], 00:05:22.340 "product_name": "passthru", 00:05:22.340 "block_size": 512, 00:05:22.340 "num_blocks": 16384, 00:05:22.340 "uuid": "578e30b8-8f6d-592a-8544-31b926f8c4a3", 00:05:22.340 "assigned_rate_limits": { 00:05:22.340 "rw_ios_per_sec": 0, 00:05:22.340 "rw_mbytes_per_sec": 0, 00:05:22.340 "r_mbytes_per_sec": 0, 00:05:22.340 "w_mbytes_per_sec": 0 00:05:22.340 }, 00:05:22.340 "claimed": false, 00:05:22.340 "zoned": false, 00:05:22.340 "supported_io_types": { 00:05:22.340 "read": true, 00:05:22.340 "write": true, 00:05:22.340 "unmap": true, 00:05:22.340 "flush": true, 00:05:22.340 "reset": true, 00:05:22.340 "nvme_admin": false, 00:05:22.340 "nvme_io": false, 00:05:22.340 "nvme_io_md": false, 00:05:22.340 "write_zeroes": true, 00:05:22.340 "zcopy": true, 00:05:22.340 "get_zone_info": false, 00:05:22.340 "zone_management": false, 00:05:22.340 "zone_append": false, 00:05:22.340 "compare": false, 00:05:22.340 "compare_and_write": false, 00:05:22.340 "abort": true, 00:05:22.340 "seek_hole": false, 00:05:22.340 "seek_data": false, 00:05:22.340 "copy": true, 00:05:22.340 "nvme_iov_md": false 00:05:22.340 }, 00:05:22.340 "memory_domains": [ 00:05:22.340 { 00:05:22.340 "dma_device_id": "system", 00:05:22.340 "dma_device_type": 1 00:05:22.340 }, 00:05:22.340 { 00:05:22.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.340 "dma_device_type": 2 00:05:22.340 } 00:05:22.340 ], 00:05:22.340 "driver_specific": { 00:05:22.340 "passthru": { 00:05:22.340 "name": "Passthru0", 00:05:22.340 "base_bdev_name": "Malloc0" 00:05:22.340 } 00:05:22.340 } 00:05:22.340 } 00:05:22.340 ]' 00:05:22.340 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:22.340 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:22.340 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:22.340 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.340 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.340 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.340 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:22.340 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.340 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.340 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.340 13:12:24 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:22.340 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.340 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.340 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.340 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:22.340 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:22.340 13:12:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:22.340 00:05:22.340 real 0m0.299s 00:05:22.340 user 0m0.184s 00:05:22.340 sys 0m0.052s 00:05:22.340 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.599 13:12:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.599 ************************************ 00:05:22.599 END TEST rpc_integrity 00:05:22.599 ************************************ 00:05:22.599 13:12:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:22.599 13:12:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.599 13:12:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.599 13:12:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.599 ************************************ 00:05:22.599 START TEST rpc_plugins 00:05:22.599 ************************************ 00:05:22.599 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:22.599 13:12:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:22.599 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.599 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.600 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.600 13:12:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:22.600 13:12:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:22.600 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.600 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.600 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.600 13:12:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:22.600 { 00:05:22.600 "name": "Malloc1", 00:05:22.600 "aliases": [ 00:05:22.600 "c3173796-5d01-485c-bb50-66d09b47853f" 00:05:22.600 ], 00:05:22.600 "product_name": "Malloc disk", 00:05:22.600 "block_size": 4096, 00:05:22.600 "num_blocks": 256, 00:05:22.600 "uuid": "c3173796-5d01-485c-bb50-66d09b47853f", 00:05:22.600 "assigned_rate_limits": { 00:05:22.600 "rw_ios_per_sec": 0, 00:05:22.600 "rw_mbytes_per_sec": 0, 00:05:22.600 "r_mbytes_per_sec": 0, 00:05:22.600 "w_mbytes_per_sec": 0 00:05:22.600 }, 00:05:22.600 "claimed": false, 00:05:22.600 "zoned": false, 00:05:22.600 "supported_io_types": { 00:05:22.600 "read": true, 00:05:22.600 "write": true, 00:05:22.600 "unmap": true, 00:05:22.600 "flush": true, 00:05:22.600 "reset": true, 00:05:22.600 "nvme_admin": false, 00:05:22.600 "nvme_io": false, 00:05:22.600 "nvme_io_md": false, 00:05:22.600 "write_zeroes": true, 00:05:22.600 "zcopy": true, 00:05:22.600 "get_zone_info": false, 00:05:22.600 "zone_management": false, 00:05:22.600 "zone_append": false, 00:05:22.600 "compare": false, 00:05:22.600 "compare_and_write": false, 00:05:22.600 "abort": true, 00:05:22.600 "seek_hole": false, 00:05:22.600 "seek_data": false, 00:05:22.600 "copy": true, 00:05:22.600 
"nvme_iov_md": false 00:05:22.600 }, 00:05:22.600 "memory_domains": [ 00:05:22.600 { 00:05:22.600 "dma_device_id": "system", 00:05:22.600 "dma_device_type": 1 00:05:22.600 }, 00:05:22.600 { 00:05:22.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.600 "dma_device_type": 2 00:05:22.600 } 00:05:22.600 ], 00:05:22.600 "driver_specific": {} 00:05:22.600 } 00:05:22.600 ]' 00:05:22.600 13:12:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:22.600 13:12:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:22.600 13:12:24 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:22.600 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.600 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.600 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.600 13:12:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:22.600 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.600 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.600 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.600 13:12:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:22.600 13:12:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:22.600 13:12:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:22.600 00:05:22.600 real 0m0.132s 00:05:22.600 user 0m0.078s 00:05:22.600 sys 0m0.020s 00:05:22.600 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.600 13:12:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.600 ************************************ 00:05:22.600 END TEST rpc_plugins 00:05:22.600 ************************************ 00:05:22.600 13:12:24 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:22.600 13:12:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.600 13:12:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.600 13:12:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.859 ************************************ 00:05:22.859 START TEST rpc_trace_cmd_test 00:05:22.859 ************************************ 00:05:22.859 13:12:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:22.859 13:12:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:22.859 13:12:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:22.859 13:12:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.859 13:12:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.859 13:12:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.859 13:12:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:22.859 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid281231", 00:05:22.859 "tpoint_group_mask": "0x8", 00:05:22.859 "iscsi_conn": { 00:05:22.859 "mask": "0x2", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "scsi": { 00:05:22.859 "mask": "0x4", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "bdev": { 00:05:22.859 "mask": "0x8", 00:05:22.859 "tpoint_mask": "0xffffffffffffffff" 00:05:22.859 }, 00:05:22.859 "nvmf_rdma": { 00:05:22.859 "mask": "0x10", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "nvmf_tcp": { 00:05:22.859 "mask": "0x20", 
00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "ftl": { 00:05:22.859 "mask": "0x40", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "blobfs": { 00:05:22.859 "mask": "0x80", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "dsa": { 00:05:22.859 "mask": "0x200", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "thread": { 00:05:22.859 "mask": "0x400", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "nvme_pcie": { 00:05:22.859 "mask": "0x800", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "iaa": { 00:05:22.859 "mask": "0x1000", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "nvme_tcp": { 00:05:22.859 "mask": "0x2000", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "bdev_nvme": { 00:05:22.859 "mask": "0x4000", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "sock": { 00:05:22.859 "mask": "0x8000", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "blob": { 00:05:22.859 "mask": "0x10000", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "bdev_raid": { 00:05:22.859 "mask": "0x20000", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 }, 00:05:22.859 "scheduler": { 00:05:22.859 "mask": "0x40000", 00:05:22.859 "tpoint_mask": "0x0" 00:05:22.859 } 00:05:22.859 }' 00:05:22.859 13:12:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:22.859 13:12:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:22.859 13:12:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:22.859 13:12:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:22.859 13:12:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:22.859 13:12:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:22.859 13:12:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:22.859 13:12:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:22.859 13:12:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:22.859 13:12:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:22.859 00:05:22.859 real 0m0.209s 00:05:22.859 user 0m0.171s 00:05:22.859 sys 0m0.031s 00:05:22.859 13:12:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.859 13:12:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.859 ************************************ 00:05:22.859 END TEST rpc_trace_cmd_test 00:05:22.859 ************************************ 00:05:23.117 13:12:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:23.117 13:12:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:23.117 13:12:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:23.117 13:12:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.117 13:12:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.117 13:12:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.117 ************************************ 00:05:23.117 START TEST rpc_daemon_integrity 00:05:23.117 ************************************ 00:05:23.117 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:23.117 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.118 13:12:25 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:23.118 { 00:05:23.118 "name": "Malloc2", 00:05:23.118 "aliases": [ 00:05:23.118 "cd54976b-05f6-4582-a2ad-e0aa9097c177" 00:05:23.118 ], 00:05:23.118 "product_name": "Malloc disk", 00:05:23.118 "block_size": 512, 00:05:23.118 "num_blocks": 16384, 00:05:23.118 "uuid": "cd54976b-05f6-4582-a2ad-e0aa9097c177", 00:05:23.118 "assigned_rate_limits": { 00:05:23.118 "rw_ios_per_sec": 0, 00:05:23.118 "rw_mbytes_per_sec": 0, 00:05:23.118 "r_mbytes_per_sec": 0, 00:05:23.118 "w_mbytes_per_sec": 0 00:05:23.118 }, 00:05:23.118 "claimed": false, 00:05:23.118 "zoned": false, 00:05:23.118 "supported_io_types": { 00:05:23.118 "read": true, 00:05:23.118 "write": true, 00:05:23.118 "unmap": true, 00:05:23.118 "flush": true, 00:05:23.118 "reset": true, 00:05:23.118 "nvme_admin": false, 00:05:23.118 "nvme_io": false, 00:05:23.118 "nvme_io_md": false, 00:05:23.118 "write_zeroes": true, 00:05:23.118 "zcopy": true, 00:05:23.118 "get_zone_info": false, 00:05:23.118 "zone_management": false, 00:05:23.118 "zone_append": false, 00:05:23.118 "compare": false, 00:05:23.118 "compare_and_write": false, 00:05:23.118 "abort": true, 00:05:23.118 "seek_hole": false, 00:05:23.118 "seek_data": false, 00:05:23.118 "copy": true, 00:05:23.118 "nvme_iov_md": false 00:05:23.118 }, 00:05:23.118 "memory_domains": [ 00:05:23.118 { 00:05:23.118 "dma_device_id": "system", 00:05:23.118 "dma_device_type": 1 00:05:23.118 }, 00:05:23.118 { 00:05:23.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.118 "dma_device_type": 2 00:05:23.118 } 00:05:23.118 ], 00:05:23.118 "driver_specific": {} 00:05:23.118 } 00:05:23.118 ]' 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.118 [2024-12-09 13:12:25.316512] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:23.118 
[2024-12-09 13:12:25.316544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:23.118 [2024-12-09 13:12:25.316562] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x52fa790 00:05:23.118 [2024-12-09 13:12:25.316571] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:23.118 [2024-12-09 13:12:25.317307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:23.118 [2024-12-09 13:12:25.317331] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:23.118 Passthru0 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:23.118 { 00:05:23.118 "name": "Malloc2", 00:05:23.118 "aliases": [ 00:05:23.118 "cd54976b-05f6-4582-a2ad-e0aa9097c177" 00:05:23.118 ], 00:05:23.118 "product_name": "Malloc disk", 00:05:23.118 "block_size": 512, 00:05:23.118 "num_blocks": 16384, 00:05:23.118 "uuid": "cd54976b-05f6-4582-a2ad-e0aa9097c177", 00:05:23.118 "assigned_rate_limits": { 00:05:23.118 "rw_ios_per_sec": 0, 00:05:23.118 "rw_mbytes_per_sec": 0, 00:05:23.118 "r_mbytes_per_sec": 0, 00:05:23.118 "w_mbytes_per_sec": 0 00:05:23.118 }, 00:05:23.118 "claimed": true, 00:05:23.118 "claim_type": "exclusive_write", 00:05:23.118 "zoned": false, 00:05:23.118 "supported_io_types": { 00:05:23.118 "read": true, 00:05:23.118 "write": true, 00:05:23.118 "unmap": true, 00:05:23.118 "flush": true, 00:05:23.118 "reset": true, 00:05:23.118 "nvme_admin": false, 00:05:23.118 "nvme_io": false, 00:05:23.118 "nvme_io_md": false, 00:05:23.118 "write_zeroes": true, 00:05:23.118 "zcopy": true, 00:05:23.118 "get_zone_info": false, 00:05:23.118 "zone_management": false, 00:05:23.118 "zone_append": false, 00:05:23.118 "compare": false, 00:05:23.118 "compare_and_write": false, 00:05:23.118 "abort": true, 00:05:23.118 "seek_hole": false, 00:05:23.118 "seek_data": false, 00:05:23.118 "copy": true, 00:05:23.118 "nvme_iov_md": false 00:05:23.118 }, 00:05:23.118 "memory_domains": [ 00:05:23.118 { 00:05:23.118 "dma_device_id": "system", 00:05:23.118 "dma_device_type": 1 00:05:23.118 }, 00:05:23.118 { 00:05:23.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.118 "dma_device_type": 2 00:05:23.118 } 00:05:23.118 ], 00:05:23.118 "driver_specific": {} 00:05:23.118 }, 00:05:23.118 { 00:05:23.118 "name": "Passthru0", 00:05:23.118 "aliases": [ 00:05:23.118 "f0dcd217-0b0e-5e7e-b3f9-7e3aea1a57c5" 00:05:23.118 ], 00:05:23.118 "product_name": "passthru", 00:05:23.118 "block_size": 512, 00:05:23.118 "num_blocks": 16384, 00:05:23.118 "uuid": "f0dcd217-0b0e-5e7e-b3f9-7e3aea1a57c5", 00:05:23.118 "assigned_rate_limits": { 00:05:23.118 "rw_ios_per_sec": 0, 00:05:23.118 "rw_mbytes_per_sec": 0, 00:05:23.118 "r_mbytes_per_sec": 0, 00:05:23.118 "w_mbytes_per_sec": 0 00:05:23.118 }, 00:05:23.118 "claimed": false, 00:05:23.118 "zoned": false, 00:05:23.118 "supported_io_types": { 00:05:23.118 "read": true, 00:05:23.118 "write": true, 00:05:23.118 "unmap": true, 00:05:23.118 "flush": true, 00:05:23.118 "reset": true, 
00:05:23.118 "nvme_admin": false, 00:05:23.118 "nvme_io": false, 00:05:23.118 "nvme_io_md": false, 00:05:23.118 "write_zeroes": true, 00:05:23.118 "zcopy": true, 00:05:23.118 "get_zone_info": false, 00:05:23.118 "zone_management": false, 00:05:23.118 "zone_append": false, 00:05:23.118 "compare": false, 00:05:23.118 "compare_and_write": false, 00:05:23.118 "abort": true, 00:05:23.118 "seek_hole": false, 00:05:23.118 "seek_data": false, 00:05:23.118 "copy": true, 00:05:23.118 "nvme_iov_md": false 00:05:23.118 }, 00:05:23.118 "memory_domains": [ 00:05:23.118 { 00:05:23.118 "dma_device_id": "system", 00:05:23.118 "dma_device_type": 1 00:05:23.118 }, 00:05:23.118 { 00:05:23.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.118 "dma_device_type": 2 00:05:23.118 } 00:05:23.118 ], 00:05:23.118 "driver_specific": { 00:05:23.118 "passthru": { 00:05:23.118 "name": "Passthru0", 00:05:23.118 "base_bdev_name": "Malloc2" 00:05:23.118 } 00:05:23.118 } 00:05:23.118 } 00:05:23.118 ]' 00:05:23.118 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:23.377 00:05:23.377 real 0m0.300s 00:05:23.377 user 0m0.180s 00:05:23.377 sys 0m0.056s 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.377 13:12:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.377 ************************************ 00:05:23.377 END TEST rpc_daemon_integrity 00:05:23.377 ************************************ 00:05:23.377 13:12:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:23.377 13:12:25 rpc -- rpc/rpc.sh@84 -- # killprocess 281231 00:05:23.377 13:12:25 rpc -- common/autotest_common.sh@954 -- # '[' -z 281231 ']' 00:05:23.377 13:12:25 rpc -- common/autotest_common.sh@958 -- # kill -0 281231 00:05:23.377 13:12:25 rpc -- common/autotest_common.sh@959 -- # uname 00:05:23.377 13:12:25 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.377 13:12:25 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 281231 
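The rpc_integrity and rpc_daemon_integrity cases that just finished follow the same pattern: create a malloc bdev, stack a passthru bdev on it, check the bdev list length with jq, then delete both in reverse order. A condensed sketch of that flow using scripts/rpc.py directly (same RPC names and flags as in the trace above; the jq checks are simplified):

    # create an 8 MiB malloc bdev with 512-byte blocks; the RPC returns the new name (e.g. Malloc0)
    malloc=$(./scripts/rpc.py bdev_malloc_create 8 512)
    # layer a passthru bdev on top of it and confirm both are listed
    ./scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length   # expected: 2
    # tear down in reverse order and confirm the list is empty again
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete "$malloc"
    ./scripts/rpc.py bdev_get_bdevs | jq length   # expected: 0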
00:05:23.377 13:12:25 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.377 13:12:25 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.377 13:12:25 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 281231' 00:05:23.377 killing process with pid 281231 00:05:23.377 13:12:25 rpc -- common/autotest_common.sh@973 -- # kill 281231 00:05:23.377 13:12:25 rpc -- common/autotest_common.sh@978 -- # wait 281231 00:05:23.636 00:05:23.636 real 0m2.193s 00:05:23.636 user 0m2.730s 00:05:23.636 sys 0m0.848s 00:05:23.636 13:12:25 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.636 13:12:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.636 ************************************ 00:05:23.636 END TEST rpc 00:05:23.636 ************************************ 00:05:23.896 13:12:25 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:23.896 13:12:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.896 13:12:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.896 13:12:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.896 ************************************ 00:05:23.896 START TEST skip_rpc 00:05:23.896 ************************************ 00:05:23.896 13:12:25 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:23.896 * Looking for test storage... 00:05:23.896 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:23.896 13:12:26 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.896 13:12:26 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.896 13:12:26 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.896 13:12:26 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.896 13:12:26 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.156 13:12:26 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:24.156 13:12:26 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:24.156 13:12:26 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.156 13:12:26 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:24.156 13:12:26 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.156 13:12:26 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:24.156 13:12:26 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:24.156 13:12:26 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.156 13:12:26 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:24.156 13:12:26 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.156 13:12:26 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.156 13:12:26 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.156 13:12:26 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:24.156 13:12:26 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.156 13:12:26 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.156 --rc genhtml_branch_coverage=1 00:05:24.156 --rc genhtml_function_coverage=1 00:05:24.156 --rc genhtml_legend=1 00:05:24.156 --rc geninfo_all_blocks=1 00:05:24.156 --rc geninfo_unexecuted_blocks=1 00:05:24.156 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:24.156 ' 00:05:24.156 13:12:26 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.156 --rc genhtml_branch_coverage=1 00:05:24.156 --rc genhtml_function_coverage=1 00:05:24.156 --rc genhtml_legend=1 00:05:24.156 --rc geninfo_all_blocks=1 00:05:24.156 --rc geninfo_unexecuted_blocks=1 00:05:24.156 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:24.156 ' 00:05:24.156 13:12:26 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.156 --rc genhtml_branch_coverage=1 00:05:24.156 --rc genhtml_function_coverage=1 00:05:24.156 --rc genhtml_legend=1 00:05:24.156 --rc geninfo_all_blocks=1 00:05:24.156 --rc geninfo_unexecuted_blocks=1 00:05:24.156 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:24.156 ' 00:05:24.156 13:12:26 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.156 --rc genhtml_branch_coverage=1 00:05:24.156 --rc genhtml_function_coverage=1 00:05:24.156 --rc genhtml_legend=1 00:05:24.156 --rc geninfo_all_blocks=1 00:05:24.156 --rc geninfo_unexecuted_blocks=1 00:05:24.156 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:24.156 ' 00:05:24.156 13:12:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:24.156 13:12:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:24.156 13:12:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:24.156 13:12:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.156 13:12:26 
skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.156 13:12:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.156 ************************************ 00:05:24.156 START TEST skip_rpc 00:05:24.156 ************************************ 00:05:24.156 13:12:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:24.156 13:12:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=281689 00:05:24.156 13:12:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.156 13:12:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:24.156 13:12:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:24.156 [2024-12-09 13:12:26.218248] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:05:24.156 [2024-12-09 13:12:26.218303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid281689 ] 00:05:24.156 [2024-12-09 13:12:26.302465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.156 [2024-12-09 13:12:26.343404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 281689 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 281689 ']' 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 281689 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 281689 00:05:29.430 
13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 281689' 00:05:29.430 killing process with pid 281689 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 281689 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 281689 00:05:29.430 00:05:29.430 real 0m5.375s 00:05:29.430 user 0m5.130s 00:05:29.430 sys 0m0.299s 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.430 13:12:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.430 ************************************ 00:05:29.430 END TEST skip_rpc 00:05:29.430 ************************************ 00:05:29.431 13:12:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:29.431 13:12:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.431 13:12:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.431 13:12:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.431 ************************************ 00:05:29.431 START TEST skip_rpc_with_json 00:05:29.431 ************************************ 00:05:29.431 13:12:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:29.431 13:12:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:29.431 13:12:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=282772 00:05:29.431 13:12:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.431 13:12:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.431 13:12:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 282772 00:05:29.431 13:12:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 282772 ']' 00:05:29.431 13:12:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.431 13:12:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.431 13:12:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.431 13:12:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.431 13:12:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.690 [2024-12-09 13:12:31.678052] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
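The skip_rpc case that just ended starts the target with --no-rpc-server and asserts that an ordinary RPC such as spdk_get_version fails, which is why the NOT rpc_cmd check above finishes with es=1. A rough sketch of that negative check, assuming the same build-tree paths as earlier (the 5-second settle time mirrors the harness's sleep 5):

    # with --no-rpc-server the RPC socket is never created, so the call below must fail
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5
    if ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: RPC answered despite --no-rpc-server" >&2
    fi
    kill -9 "$tgt_pid"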
00:05:29.690 [2024-12-09 13:12:31.678132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid282772 ] 00:05:29.690 [2024-12-09 13:12:31.763901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.690 [2024-12-09 13:12:31.805465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.950 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.950 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:29.950 13:12:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:29.950 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.951 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.951 [2024-12-09 13:12:32.025523] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:29.951 request: 00:05:29.951 { 00:05:29.951 "trtype": "tcp", 00:05:29.951 "method": "nvmf_get_transports", 00:05:29.951 "req_id": 1 00:05:29.951 } 00:05:29.951 Got JSON-RPC error response 00:05:29.951 response: 00:05:29.951 { 00:05:29.951 "code": -19, 00:05:29.951 "message": "No such device" 00:05:29.951 } 00:05:29.951 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:29.951 13:12:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:29.951 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.951 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.951 [2024-12-09 13:12:32.037622] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.951 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.951 13:12:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:29.951 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.951 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.210 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.210 13:12:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:30.210 { 00:05:30.210 "subsystems": [ 00:05:30.210 { 00:05:30.210 "subsystem": "scheduler", 00:05:30.210 "config": [ 00:05:30.210 { 00:05:30.210 "method": "framework_set_scheduler", 00:05:30.210 "params": { 00:05:30.210 "name": "static" 00:05:30.210 } 00:05:30.210 } 00:05:30.210 ] 00:05:30.210 }, 00:05:30.210 { 00:05:30.210 "subsystem": "vmd", 00:05:30.210 "config": [] 00:05:30.210 }, 00:05:30.210 { 00:05:30.210 "subsystem": "sock", 00:05:30.210 "config": [ 00:05:30.210 { 00:05:30.210 "method": "sock_set_default_impl", 00:05:30.210 "params": { 00:05:30.210 "impl_name": "posix" 00:05:30.210 } 00:05:30.210 }, 00:05:30.210 { 00:05:30.210 "method": "sock_impl_set_options", 00:05:30.210 "params": { 00:05:30.210 "impl_name": "ssl", 00:05:30.210 "recv_buf_size": 4096, 00:05:30.210 "send_buf_size": 4096, 00:05:30.210 "enable_recv_pipe": true, 00:05:30.210 "enable_quickack": false, 00:05:30.210 "enable_placement_id": 
0, 00:05:30.210 "enable_zerocopy_send_server": true, 00:05:30.210 "enable_zerocopy_send_client": false, 00:05:30.210 "zerocopy_threshold": 0, 00:05:30.210 "tls_version": 0, 00:05:30.210 "enable_ktls": false 00:05:30.210 } 00:05:30.210 }, 00:05:30.210 { 00:05:30.210 "method": "sock_impl_set_options", 00:05:30.210 "params": { 00:05:30.210 "impl_name": "posix", 00:05:30.210 "recv_buf_size": 2097152, 00:05:30.210 "send_buf_size": 2097152, 00:05:30.210 "enable_recv_pipe": true, 00:05:30.210 "enable_quickack": false, 00:05:30.210 "enable_placement_id": 0, 00:05:30.210 "enable_zerocopy_send_server": true, 00:05:30.210 "enable_zerocopy_send_client": false, 00:05:30.210 "zerocopy_threshold": 0, 00:05:30.210 "tls_version": 0, 00:05:30.210 "enable_ktls": false 00:05:30.210 } 00:05:30.210 } 00:05:30.210 ] 00:05:30.210 }, 00:05:30.210 { 00:05:30.210 "subsystem": "iobuf", 00:05:30.210 "config": [ 00:05:30.210 { 00:05:30.210 "method": "iobuf_set_options", 00:05:30.210 "params": { 00:05:30.210 "small_pool_count": 8192, 00:05:30.210 "large_pool_count": 1024, 00:05:30.210 "small_bufsize": 8192, 00:05:30.210 "large_bufsize": 135168, 00:05:30.210 "enable_numa": false 00:05:30.210 } 00:05:30.210 } 00:05:30.210 ] 00:05:30.210 }, 00:05:30.210 { 00:05:30.210 "subsystem": "keyring", 00:05:30.210 "config": [] 00:05:30.210 }, 00:05:30.210 { 00:05:30.210 "subsystem": "vfio_user_target", 00:05:30.210 "config": null 00:05:30.210 }, 00:05:30.210 { 00:05:30.211 "subsystem": "fsdev", 00:05:30.211 "config": [ 00:05:30.211 { 00:05:30.211 "method": "fsdev_set_opts", 00:05:30.211 "params": { 00:05:30.211 "fsdev_io_pool_size": 65535, 00:05:30.211 "fsdev_io_cache_size": 256 00:05:30.211 } 00:05:30.211 } 00:05:30.211 ] 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "subsystem": "accel", 00:05:30.211 "config": [ 00:05:30.211 { 00:05:30.211 "method": "accel_set_options", 00:05:30.211 "params": { 00:05:30.211 "small_cache_size": 128, 00:05:30.211 "large_cache_size": 16, 00:05:30.211 "task_count": 2048, 00:05:30.211 "sequence_count": 2048, 00:05:30.211 "buf_count": 2048 00:05:30.211 } 00:05:30.211 } 00:05:30.211 ] 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "subsystem": "bdev", 00:05:30.211 "config": [ 00:05:30.211 { 00:05:30.211 "method": "bdev_set_options", 00:05:30.211 "params": { 00:05:30.211 "bdev_io_pool_size": 65535, 00:05:30.211 "bdev_io_cache_size": 256, 00:05:30.211 "bdev_auto_examine": true, 00:05:30.211 "iobuf_small_cache_size": 128, 00:05:30.211 "iobuf_large_cache_size": 16 00:05:30.211 } 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "method": "bdev_raid_set_options", 00:05:30.211 "params": { 00:05:30.211 "process_window_size_kb": 1024, 00:05:30.211 "process_max_bandwidth_mb_sec": 0 00:05:30.211 } 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "method": "bdev_nvme_set_options", 00:05:30.211 "params": { 00:05:30.211 "action_on_timeout": "none", 00:05:30.211 "timeout_us": 0, 00:05:30.211 "timeout_admin_us": 0, 00:05:30.211 "keep_alive_timeout_ms": 10000, 00:05:30.211 "arbitration_burst": 0, 00:05:30.211 "low_priority_weight": 0, 00:05:30.211 "medium_priority_weight": 0, 00:05:30.211 "high_priority_weight": 0, 00:05:30.211 "nvme_adminq_poll_period_us": 10000, 00:05:30.211 "nvme_ioq_poll_period_us": 0, 00:05:30.211 "io_queue_requests": 0, 00:05:30.211 "delay_cmd_submit": true, 00:05:30.211 "transport_retry_count": 4, 00:05:30.211 "bdev_retry_count": 3, 00:05:30.211 "transport_ack_timeout": 0, 00:05:30.211 "ctrlr_loss_timeout_sec": 0, 00:05:30.211 "reconnect_delay_sec": 0, 00:05:30.211 "fast_io_fail_timeout_sec": 0, 00:05:30.211 
"disable_auto_failback": false, 00:05:30.211 "generate_uuids": false, 00:05:30.211 "transport_tos": 0, 00:05:30.211 "nvme_error_stat": false, 00:05:30.211 "rdma_srq_size": 0, 00:05:30.211 "io_path_stat": false, 00:05:30.211 "allow_accel_sequence": false, 00:05:30.211 "rdma_max_cq_size": 0, 00:05:30.211 "rdma_cm_event_timeout_ms": 0, 00:05:30.211 "dhchap_digests": [ 00:05:30.211 "sha256", 00:05:30.211 "sha384", 00:05:30.211 "sha512" 00:05:30.211 ], 00:05:30.211 "dhchap_dhgroups": [ 00:05:30.211 "null", 00:05:30.211 "ffdhe2048", 00:05:30.211 "ffdhe3072", 00:05:30.211 "ffdhe4096", 00:05:30.211 "ffdhe6144", 00:05:30.211 "ffdhe8192" 00:05:30.211 ] 00:05:30.211 } 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "method": "bdev_nvme_set_hotplug", 00:05:30.211 "params": { 00:05:30.211 "period_us": 100000, 00:05:30.211 "enable": false 00:05:30.211 } 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "method": "bdev_iscsi_set_options", 00:05:30.211 "params": { 00:05:30.211 "timeout_sec": 30 00:05:30.211 } 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "method": "bdev_wait_for_examine" 00:05:30.211 } 00:05:30.211 ] 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "subsystem": "nvmf", 00:05:30.211 "config": [ 00:05:30.211 { 00:05:30.211 "method": "nvmf_set_config", 00:05:30.211 "params": { 00:05:30.211 "discovery_filter": "match_any", 00:05:30.211 "admin_cmd_passthru": { 00:05:30.211 "identify_ctrlr": false 00:05:30.211 }, 00:05:30.211 "dhchap_digests": [ 00:05:30.211 "sha256", 00:05:30.211 "sha384", 00:05:30.211 "sha512" 00:05:30.211 ], 00:05:30.211 "dhchap_dhgroups": [ 00:05:30.211 "null", 00:05:30.211 "ffdhe2048", 00:05:30.211 "ffdhe3072", 00:05:30.211 "ffdhe4096", 00:05:30.211 "ffdhe6144", 00:05:30.211 "ffdhe8192" 00:05:30.211 ] 00:05:30.211 } 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "method": "nvmf_set_max_subsystems", 00:05:30.211 "params": { 00:05:30.211 "max_subsystems": 1024 00:05:30.211 } 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "method": "nvmf_set_crdt", 00:05:30.211 "params": { 00:05:30.211 "crdt1": 0, 00:05:30.211 "crdt2": 0, 00:05:30.211 "crdt3": 0 00:05:30.211 } 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "method": "nvmf_create_transport", 00:05:30.211 "params": { 00:05:30.211 "trtype": "TCP", 00:05:30.211 "max_queue_depth": 128, 00:05:30.211 "max_io_qpairs_per_ctrlr": 127, 00:05:30.211 "in_capsule_data_size": 4096, 00:05:30.211 "max_io_size": 131072, 00:05:30.211 "io_unit_size": 131072, 00:05:30.211 "max_aq_depth": 128, 00:05:30.211 "num_shared_buffers": 511, 00:05:30.211 "buf_cache_size": 4294967295, 00:05:30.211 "dif_insert_or_strip": false, 00:05:30.211 "zcopy": false, 00:05:30.211 "c2h_success": true, 00:05:30.211 "sock_priority": 0, 00:05:30.211 "abort_timeout_sec": 1, 00:05:30.211 "ack_timeout": 0, 00:05:30.211 "data_wr_pool_size": 0 00:05:30.211 } 00:05:30.211 } 00:05:30.211 ] 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "subsystem": "nbd", 00:05:30.211 "config": [] 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "subsystem": "ublk", 00:05:30.211 "config": [] 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "subsystem": "vhost_blk", 00:05:30.211 "config": [] 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "subsystem": "scsi", 00:05:30.211 "config": null 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "subsystem": "iscsi", 00:05:30.211 "config": [ 00:05:30.211 { 00:05:30.211 "method": "iscsi_set_options", 00:05:30.211 "params": { 00:05:30.211 "node_base": "iqn.2016-06.io.spdk", 00:05:30.211 "max_sessions": 128, 00:05:30.211 "max_connections_per_session": 2, 00:05:30.211 "max_queue_depth": 64, 00:05:30.211 
"default_time2wait": 2, 00:05:30.211 "default_time2retain": 20, 00:05:30.211 "first_burst_length": 8192, 00:05:30.211 "immediate_data": true, 00:05:30.211 "allow_duplicated_isid": false, 00:05:30.211 "error_recovery_level": 0, 00:05:30.211 "nop_timeout": 60, 00:05:30.211 "nop_in_interval": 30, 00:05:30.211 "disable_chap": false, 00:05:30.211 "require_chap": false, 00:05:30.211 "mutual_chap": false, 00:05:30.211 "chap_group": 0, 00:05:30.211 "max_large_datain_per_connection": 64, 00:05:30.211 "max_r2t_per_connection": 4, 00:05:30.211 "pdu_pool_size": 36864, 00:05:30.211 "immediate_data_pool_size": 16384, 00:05:30.211 "data_out_pool_size": 2048 00:05:30.211 } 00:05:30.211 } 00:05:30.211 ] 00:05:30.211 }, 00:05:30.211 { 00:05:30.211 "subsystem": "vhost_scsi", 00:05:30.211 "config": [] 00:05:30.211 } 00:05:30.211 ] 00:05:30.211 } 00:05:30.211 13:12:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:30.211 13:12:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 282772 00:05:30.211 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 282772 ']' 00:05:30.211 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 282772 00:05:30.211 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:30.211 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.211 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 282772 00:05:30.211 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.211 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.211 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 282772' 00:05:30.211 killing process with pid 282772 00:05:30.211 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 282772 00:05:30.211 13:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 282772 00:05:30.471 13:12:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=282827 00:05:30.471 13:12:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:30.471 13:12:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 282827 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 282827 ']' 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 282827 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 282827 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 282827' 00:05:35.744 killing process with pid 282827 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 282827 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 282827 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:35.744 00:05:35.744 real 0m6.299s 00:05:35.744 user 0m5.957s 00:05:35.744 sys 0m0.664s 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.744 13:12:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:35.744 ************************************ 00:05:35.744 END TEST skip_rpc_with_json 00:05:35.744 ************************************ 00:05:36.004 13:12:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:36.004 13:12:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.004 13:12:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.004 13:12:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.004 ************************************ 00:05:36.004 START TEST skip_rpc_with_delay 00:05:36.004 ************************************ 00:05:36.004 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:36.004 13:12:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:36.004 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 
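The JSON dump above is the tail of the configuration that skip_rpc_with_json captured from the running target with save_config and wrote to test/rpc/config.json; that same file is then handed to a fresh spdk_tgt started with --no-rpc-server --json, and the pass criterion is the 'TCP Transport Init' line grepped out of its log. A condensed sketch of that round-trip, using placeholder paths under /tmp instead of the test tree:

    #!/usr/bin/env bash
    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

    # Capture the live configuration of an already running target.
    "$SPDK_DIR/scripts/rpc.py" save_config > /tmp/config.json

    # Restart from that JSON with no RPC server at all; if the saved nvmf
    # section is valid, the TCP transport is created during startup.
    "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 \
        --json /tmp/config.json > /tmp/log.txt 2>&1 &
    pid=$!
    sleep 5

    grep -q 'TCP Transport Init' /tmp/log.txt && echo "config round-trip OK"
    kill -SIGINT "$pid"
    wait "$pid"

The skip_rpc_with_delay invocation at the end of the entry above exercises the opposite case: --wait-for-rpc combined with --no-rpc-server is invalid, and the error it produces follows next.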
00:05:36.005 [2024-12-09 13:12:38.060696] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.005 00:05:36.005 real 0m0.047s 00:05:36.005 user 0m0.020s 00:05:36.005 sys 0m0.027s 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.005 13:12:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:36.005 ************************************ 00:05:36.005 END TEST skip_rpc_with_delay 00:05:36.005 ************************************ 00:05:36.005 13:12:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:36.005 13:12:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:36.005 13:12:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:36.005 13:12:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.005 13:12:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.005 13:12:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.005 ************************************ 00:05:36.005 START TEST exit_on_failed_rpc_init 00:05:36.005 ************************************ 00:05:36.005 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:36.005 13:12:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=283910 00:05:36.005 13:12:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 283910 00:05:36.005 13:12:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.005 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 283910 ']' 00:05:36.005 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.005 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.005 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.005 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.005 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.005 [2024-12-09 13:12:38.192008] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:05:36.005 [2024-12-09 13:12:38.192093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283910 ] 00:05:36.264 [2024-12-09 13:12:38.280334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.264 [2024-12-09 13:12:38.323080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:36.523 [2024-12-09 13:12:38.566327] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:05:36.523 [2024-12-09 13:12:38.566416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283918 ] 00:05:36.523 [2024-12-09 13:12:38.652613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.523 [2024-12-09 13:12:38.692675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.523 [2024-12-09 13:12:38.692744] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
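This "socket in use" error is the expected outcome of exit_on_failed_rpc_init: a first spdk_tgt (pid 283910 above) already owns the default RPC socket /var/tmp/spdk.sock, and a second instance started without -r to pick another path must fail its RPC init and stop non-zero, which the "Unable to start RPC service" entry just below completes. A rough stand-alone equivalent of the same check; the two-second sleep is a stand-in for the test's waitforlisten polling:

    #!/usr/bin/env bash
    SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

    # First instance binds the default RPC socket (/var/tmp/spdk.sock).
    "$SPDK_BIN" -m 0x1 &
    first=$!
    sleep 2   # crude; the real test polls the socket instead

    # Second instance on another core mask but the same default socket:
    # rpc.c reports the path as in use and the app exits non-zero.
    if "$SPDK_BIN" -m 0x2; then
        echo "unexpected: second instance bound /var/tmp/spdk.sock" >&2
        kill -SIGINT "$first"; exit 1
    fi

    kill -SIGINT "$first"
    wait "$first"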
00:05:36.523 [2024-12-09 13:12:38.692756] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:36.523 [2024-12-09 13:12:38.692764] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 283910 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 283910 ']' 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 283910 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.523 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283910 00:05:36.782 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.782 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.782 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283910' 00:05:36.782 killing process with pid 283910 00:05:36.782 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 283910 00:05:36.782 13:12:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 283910 00:05:37.042 00:05:37.042 real 0m0.928s 00:05:37.042 user 0m0.949s 00:05:37.042 sys 0m0.426s 00:05:37.042 13:12:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.042 13:12:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:37.042 ************************************ 00:05:37.042 END TEST exit_on_failed_rpc_init 00:05:37.042 ************************************ 00:05:37.042 13:12:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:37.042 00:05:37.042 real 0m13.184s 00:05:37.042 user 0m12.267s 00:05:37.042 sys 0m1.782s 00:05:37.042 13:12:39 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.042 13:12:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.042 ************************************ 00:05:37.042 END TEST skip_rpc 00:05:37.042 ************************************ 00:05:37.042 13:12:39 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:37.042 13:12:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.042 13:12:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.042 13:12:39 -- 
common/autotest_common.sh@10 -- # set +x 00:05:37.042 ************************************ 00:05:37.042 START TEST rpc_client 00:05:37.042 ************************************ 00:05:37.042 13:12:39 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:37.302 * Looking for test storage... 00:05:37.302 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:37.303 13:12:39 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.303 13:12:39 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:37.303 13:12:39 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:37.303 13:12:39 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.303 13:12:39 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:37.303 13:12:39 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.303 13:12:39 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.303 --rc genhtml_branch_coverage=1 00:05:37.303 --rc genhtml_function_coverage=1 00:05:37.303 --rc genhtml_legend=1 00:05:37.303 --rc geninfo_all_blocks=1 00:05:37.303 --rc geninfo_unexecuted_blocks=1 00:05:37.303 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.303 ' 00:05:37.303 13:12:39 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.303 --rc genhtml_branch_coverage=1 00:05:37.303 --rc genhtml_function_coverage=1 00:05:37.303 --rc genhtml_legend=1 00:05:37.303 --rc geninfo_all_blocks=1 00:05:37.303 --rc geninfo_unexecuted_blocks=1 00:05:37.303 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.303 ' 00:05:37.303 13:12:39 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:37.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.303 --rc genhtml_branch_coverage=1 00:05:37.303 --rc genhtml_function_coverage=1 00:05:37.303 --rc genhtml_legend=1 00:05:37.303 --rc geninfo_all_blocks=1 00:05:37.303 --rc geninfo_unexecuted_blocks=1 00:05:37.303 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.303 ' 00:05:37.303 13:12:39 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.303 --rc genhtml_branch_coverage=1 00:05:37.303 --rc genhtml_function_coverage=1 00:05:37.303 --rc genhtml_legend=1 00:05:37.303 --rc geninfo_all_blocks=1 00:05:37.303 --rc geninfo_unexecuted_blocks=1 00:05:37.303 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.303 ' 00:05:37.303 13:12:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:37.303 OK 00:05:37.303 13:12:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:37.303 00:05:37.303 real 0m0.218s 00:05:37.303 user 0m0.115s 00:05:37.303 sys 0m0.121s 00:05:37.303 13:12:39 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 
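The block of scripts/common.sh trace above (IFS=.-:, read -ra, decimal, lt 1.15 2) is the version gate each of these suites runs before setting LCOV_OPTS: the installed lcov version is split into numeric fields and compared to 2, field by field, and only an older lcov keeps the extra --rc lcov_branch_coverage / lcov_function_coverage switches. A simplified sketch of that comparison (not the exact helper, which also splits on '-' and ':'):

    #!/usr/bin/env bash
    # Succeeds (exit 0) when $1 is strictly older than $2, comparing
    # dot-separated numeric fields left to right, e.g. version_lt 1.15 2.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            ((x > y)) && return 1
            ((x < y)) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "1.15 predates 2, keep the legacy lcov flags"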
00:05:37.303 13:12:39 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:37.303 ************************************ 00:05:37.303 END TEST rpc_client 00:05:37.303 ************************************ 00:05:37.303 13:12:39 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:37.303 13:12:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.303 13:12:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.303 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:05:37.303 ************************************ 00:05:37.303 START TEST json_config 00:05:37.303 ************************************ 00:05:37.303 13:12:39 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:37.562 13:12:39 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.562 13:12:39 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:37.562 13:12:39 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:37.562 13:12:39 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:37.562 13:12:39 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.562 13:12:39 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.562 13:12:39 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.563 13:12:39 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.563 13:12:39 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.563 13:12:39 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.563 13:12:39 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.563 13:12:39 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.563 13:12:39 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.563 13:12:39 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.563 13:12:39 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.563 13:12:39 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:37.563 13:12:39 json_config -- scripts/common.sh@345 -- # : 1 00:05:37.563 13:12:39 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.563 13:12:39 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.563 13:12:39 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:37.563 13:12:39 json_config -- scripts/common.sh@353 -- # local d=1 00:05:37.563 13:12:39 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.563 13:12:39 json_config -- scripts/common.sh@355 -- # echo 1 00:05:37.563 13:12:39 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.563 13:12:39 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:37.563 13:12:39 json_config -- scripts/common.sh@353 -- # local d=2 00:05:37.563 13:12:39 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.563 13:12:39 json_config -- scripts/common.sh@355 -- # echo 2 00:05:37.563 13:12:39 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.563 13:12:39 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.563 13:12:39 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.563 13:12:39 json_config -- scripts/common.sh@368 -- # return 0 00:05:37.563 13:12:39 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.563 13:12:39 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.563 --rc genhtml_branch_coverage=1 00:05:37.563 --rc genhtml_function_coverage=1 00:05:37.563 --rc genhtml_legend=1 00:05:37.563 --rc geninfo_all_blocks=1 00:05:37.563 --rc geninfo_unexecuted_blocks=1 00:05:37.563 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.563 ' 00:05:37.563 13:12:39 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.563 --rc genhtml_branch_coverage=1 00:05:37.563 --rc genhtml_function_coverage=1 00:05:37.563 --rc genhtml_legend=1 00:05:37.563 --rc geninfo_all_blocks=1 00:05:37.563 --rc geninfo_unexecuted_blocks=1 00:05:37.563 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.563 ' 00:05:37.563 13:12:39 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:37.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.563 --rc genhtml_branch_coverage=1 00:05:37.563 --rc genhtml_function_coverage=1 00:05:37.563 --rc genhtml_legend=1 00:05:37.563 --rc geninfo_all_blocks=1 00:05:37.563 --rc geninfo_unexecuted_blocks=1 00:05:37.563 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.563 ' 00:05:37.563 13:12:39 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.563 --rc genhtml_branch_coverage=1 00:05:37.563 --rc genhtml_function_coverage=1 00:05:37.563 --rc genhtml_legend=1 00:05:37.563 --rc geninfo_all_blocks=1 00:05:37.563 --rc geninfo_unexecuted_blocks=1 00:05:37.563 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.563 ' 00:05:37.563 13:12:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:37.563 13:12:39 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:37.563 13:12:39 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.563 13:12:39 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.563 13:12:39 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.563 13:12:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.563 13:12:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.563 13:12:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.563 13:12:39 json_config -- paths/export.sh@5 -- # export PATH 00:05:37.563 13:12:39 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@51 -- # : 0 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:37.563 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:37.563 13:12:39 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:37.563 13:12:39 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:37.563 13:12:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:37.563 13:12:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:37.563 13:12:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:37.563 13:12:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:37.563 13:12:39 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:37.563 WARNING: No tests are enabled so not running JSON configuration tests 00:05:37.563 13:12:39 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:37.563 00:05:37.563 real 0m0.199s 00:05:37.563 user 0m0.115s 00:05:37.563 sys 0m0.092s 00:05:37.563 13:12:39 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.563 13:12:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.563 ************************************ 00:05:37.563 END TEST json_config 00:05:37.563 ************************************ 00:05:37.563 13:12:39 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:37.563 13:12:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.563 13:12:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.563 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:05:37.563 ************************************ 00:05:37.563 START TEST json_config_extra_key 00:05:37.563 ************************************ 00:05:37.563 13:12:39 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:37.823 13:12:39 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.823 13:12:39 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov 
--version 00:05:37.823 13:12:39 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:37.823 13:12:39 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.823 13:12:39 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:37.823 13:12:39 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.823 13:12:39 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.823 --rc genhtml_branch_coverage=1 00:05:37.823 --rc genhtml_function_coverage=1 00:05:37.823 --rc genhtml_legend=1 00:05:37.823 --rc geninfo_all_blocks=1 00:05:37.823 --rc geninfo_unexecuted_blocks=1 00:05:37.823 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.823 ' 00:05:37.823 13:12:39 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.823 --rc genhtml_branch_coverage=1 
00:05:37.823 --rc genhtml_function_coverage=1 00:05:37.823 --rc genhtml_legend=1 00:05:37.823 --rc geninfo_all_blocks=1 00:05:37.823 --rc geninfo_unexecuted_blocks=1 00:05:37.823 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.823 ' 00:05:37.823 13:12:39 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:37.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.823 --rc genhtml_branch_coverage=1 00:05:37.823 --rc genhtml_function_coverage=1 00:05:37.823 --rc genhtml_legend=1 00:05:37.823 --rc geninfo_all_blocks=1 00:05:37.823 --rc geninfo_unexecuted_blocks=1 00:05:37.823 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.823 ' 00:05:37.823 13:12:39 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.823 --rc genhtml_branch_coverage=1 00:05:37.823 --rc genhtml_function_coverage=1 00:05:37.824 --rc genhtml_legend=1 00:05:37.824 --rc geninfo_all_blocks=1 00:05:37.824 --rc geninfo_unexecuted_blocks=1 00:05:37.824 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.824 ' 00:05:37.824 13:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:37.824 13:12:39 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:37.824 13:12:39 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.824 13:12:39 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.824 13:12:39 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.824 13:12:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.824 13:12:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.824 13:12:39 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.824 13:12:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:37.824 13:12:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:37.824 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:37.824 13:12:39 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:37.824 13:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:37.824 13:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:37.824 13:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:05:37.824 13:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:37.824 13:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:37.824 13:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:37.824 13:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:37.824 13:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:37.824 13:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:37.824 13:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:37.824 13:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:37.824 INFO: launching applications... 00:05:37.824 13:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:37.824 13:12:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:37.824 13:12:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:37.824 13:12:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:37.824 13:12:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:37.824 13:12:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:37.824 13:12:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.824 13:12:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.824 13:12:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=284347 00:05:37.824 13:12:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:37.824 Waiting for target to run... 00:05:37.824 13:12:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 284347 /var/tmp/spdk_tgt.sock 00:05:37.824 13:12:40 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:37.824 13:12:40 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 284347 ']' 00:05:37.824 13:12:40 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.824 13:12:40 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.825 13:12:40 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
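The entries that follow show the launch pattern json_config_extra_key uses: spdk_tgt is started with a 1024 MB memory pool (-s 1024), a dedicated RPC socket (-r /var/tmp/spdk_tgt.sock) and the extra_key.json configuration, the script waits for the socket to appear, and shutdown is a SIGINT followed by a kill -0 polling loop. A condensed sketch of the same start/wait/stop flow; the 0.5 s poll interval is an assumption, and the real helpers live in json_config/common.sh:

    #!/usr/bin/env bash
    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    SOCK=/var/tmp/spdk_tgt.sock

    # Start the target from the extra_key.json configuration.
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
        --json "$SPDK_DIR/test/json_config/extra_key.json" &
    pid=$!

    # Wait for the RPC socket to show up (the real test uses waitforlisten).
    for _ in $(seq 1 30); do
        [[ -S $SOCK ]] && break
        sleep 0.5
    done

    # Graceful shutdown: SIGINT, then poll until the process is gone.
    kill -SIGINT "$pid"
    for _ in $(seq 1 30); do
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done
    kill -0 "$pid" 2>/dev/null && { echo "target did not stop" >&2; exit 1; }
    echo 'SPDK target shutdown done'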
00:05:37.825 13:12:40 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.825 13:12:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:37.825 [2024-12-09 13:12:40.030653] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:05:37.825 [2024-12-09 13:12:40.030722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284347 ] 00:05:38.394 [2024-12-09 13:12:40.484192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.394 [2024-12-09 13:12:40.540915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.652 13:12:40 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.652 13:12:40 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:38.652 13:12:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:38.652 00:05:38.652 13:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:38.652 INFO: shutting down applications... 00:05:38.653 13:12:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:38.653 13:12:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:38.653 13:12:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:38.653 13:12:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 284347 ]] 00:05:38.653 13:12:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 284347 00:05:38.653 13:12:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:38.653 13:12:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.653 13:12:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 284347 00:05:38.653 13:12:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.221 13:12:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.221 13:12:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.221 13:12:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 284347 00:05:39.221 13:12:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:39.221 13:12:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:39.221 13:12:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:39.221 13:12:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:39.221 SPDK target shutdown done 00:05:39.221 13:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:39.221 Success 00:05:39.221 00:05:39.221 real 0m1.600s 00:05:39.221 user 0m1.173s 00:05:39.221 sys 0m0.610s 00:05:39.221 13:12:41 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.221 13:12:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:39.221 ************************************ 00:05:39.221 END TEST json_config_extra_key 00:05:39.221 ************************************ 00:05:39.221 13:12:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
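The alias_rpc run that follows starts a bare spdk_tgt and then drives it with scripts/rpc.py load_config -i (visible a few entries below); load_config reads a JSON configuration from stdin and replays it as RPC calls, and the -i switch, kept verbatim from the trace, is presumably what lets the deprecated alias method names through, given the suite's name. A minimal way to poke the same code path by hand; the one-line config is an illustrative stub, not the payload the test feeds in:

    #!/usr/bin/env bash
    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
    pid=$!
    sleep 2   # crude; the test waits for /var/tmp/spdk.sock instead

    # load_config reads JSON from stdin and issues the matching RPCs; -i is
    # copied from the trace below, where alias handling is the point.
    echo '{"subsystems": []}' | "$SPDK_DIR/scripts/rpc.py" load_config -i

    kill -SIGINT "$pid"
    wait "$pid"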
00:05:39.221 13:12:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.221 13:12:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.221 13:12:41 -- common/autotest_common.sh@10 -- # set +x 00:05:39.481 ************************************ 00:05:39.481 START TEST alias_rpc 00:05:39.481 ************************************ 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.481 * Looking for test storage... 00:05:39.481 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.481 13:12:41 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:39.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.481 --rc genhtml_branch_coverage=1 00:05:39.481 --rc genhtml_function_coverage=1 00:05:39.481 --rc genhtml_legend=1 00:05:39.481 --rc geninfo_all_blocks=1 00:05:39.481 --rc geninfo_unexecuted_blocks=1 00:05:39.481 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:39.481 ' 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:39.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.481 --rc genhtml_branch_coverage=1 00:05:39.481 --rc genhtml_function_coverage=1 00:05:39.481 --rc genhtml_legend=1 00:05:39.481 --rc geninfo_all_blocks=1 00:05:39.481 --rc geninfo_unexecuted_blocks=1 00:05:39.481 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:39.481 ' 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:39.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.481 --rc genhtml_branch_coverage=1 00:05:39.481 --rc genhtml_function_coverage=1 00:05:39.481 --rc genhtml_legend=1 00:05:39.481 --rc geninfo_all_blocks=1 00:05:39.481 --rc geninfo_unexecuted_blocks=1 00:05:39.481 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:39.481 ' 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:39.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.481 --rc genhtml_branch_coverage=1 00:05:39.481 --rc genhtml_function_coverage=1 00:05:39.481 --rc genhtml_legend=1 00:05:39.481 --rc geninfo_all_blocks=1 00:05:39.481 --rc geninfo_unexecuted_blocks=1 00:05:39.481 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:39.481 ' 00:05:39.481 13:12:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:39.481 13:12:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=284682 00:05:39.481 13:12:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.481 13:12:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 284682 00:05:39.481 13:12:41 alias_rpc -- 
common/autotest_common.sh@835 -- # '[' -z 284682 ']' 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.481 13:12:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.481 [2024-12-09 13:12:41.699133] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:05:39.481 [2024-12-09 13:12:41.699223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284682 ] 00:05:39.741 [2024-12-09 13:12:41.783515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.741 [2024-12-09 13:12:41.824935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.000 13:12:42 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.000 13:12:42 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.000 13:12:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:40.259 13:12:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 284682 00:05:40.259 13:12:42 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 284682 ']' 00:05:40.259 13:12:42 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 284682 00:05:40.259 13:12:42 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:40.259 13:12:42 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.259 13:12:42 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 284682 00:05:40.259 13:12:42 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.259 13:12:42 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.259 13:12:42 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 284682' 00:05:40.259 killing process with pid 284682 00:05:40.259 13:12:42 alias_rpc -- common/autotest_common.sh@973 -- # kill 284682 00:05:40.259 13:12:42 alias_rpc -- common/autotest_common.sh@978 -- # wait 284682 00:05:40.518 00:05:40.518 real 0m1.160s 00:05:40.518 user 0m1.151s 00:05:40.518 sys 0m0.470s 00:05:40.518 13:12:42 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.518 13:12:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.518 ************************************ 00:05:40.518 END TEST alias_rpc 00:05:40.518 ************************************ 00:05:40.518 13:12:42 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:40.518 13:12:42 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:40.518 13:12:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.518 13:12:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.518 13:12:42 -- common/autotest_common.sh@10 -- # set +x 00:05:40.518 ************************************ 00:05:40.518 START TEST spdkcli_tcp 
00:05:40.518 ************************************ 00:05:40.518 13:12:42 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:40.778 * Looking for test storage... 00:05:40.778 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.778 13:12:42 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.778 --rc genhtml_branch_coverage=1 00:05:40.778 --rc genhtml_function_coverage=1 00:05:40.778 --rc genhtml_legend=1 00:05:40.778 --rc geninfo_all_blocks=1 00:05:40.778 --rc geninfo_unexecuted_blocks=1 00:05:40.778 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.778 ' 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.778 --rc genhtml_branch_coverage=1 00:05:40.778 --rc genhtml_function_coverage=1 00:05:40.778 --rc genhtml_legend=1 00:05:40.778 --rc geninfo_all_blocks=1 00:05:40.778 --rc geninfo_unexecuted_blocks=1 00:05:40.778 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.778 ' 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.778 --rc genhtml_branch_coverage=1 00:05:40.778 --rc genhtml_function_coverage=1 00:05:40.778 --rc genhtml_legend=1 00:05:40.778 --rc geninfo_all_blocks=1 00:05:40.778 --rc geninfo_unexecuted_blocks=1 00:05:40.778 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.778 ' 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.778 --rc genhtml_branch_coverage=1 00:05:40.778 --rc genhtml_function_coverage=1 00:05:40.778 --rc genhtml_legend=1 00:05:40.778 --rc geninfo_all_blocks=1 00:05:40.778 --rc geninfo_unexecuted_blocks=1 00:05:40.778 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.778 ' 00:05:40.778 13:12:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:40.778 13:12:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:40.778 13:12:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:40.778 13:12:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:40.778 13:12:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:40.778 13:12:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:40.778 13:12:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.778 13:12:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=285003 00:05:40.778 13:12:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:40.778 13:12:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 285003 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 285003 ']' 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.778 13:12:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.778 [2024-12-09 13:12:42.954632] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:05:40.779 [2024-12-09 13:12:42.954693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285003 ] 00:05:41.037 [2024-12-09 13:12:43.039410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.037 [2024-12-09 13:12:43.082247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.037 [2024-12-09 13:12:43.082247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.295 13:12:43 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.295 13:12:43 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:41.295 13:12:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=285013 00:05:41.295 13:12:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:41.295 13:12:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:41.295 [ 00:05:41.295 "spdk_get_version", 00:05:41.295 "rpc_get_methods", 00:05:41.295 "notify_get_notifications", 00:05:41.295 "notify_get_types", 00:05:41.295 "trace_get_info", 00:05:41.295 "trace_get_tpoint_group_mask", 00:05:41.295 "trace_disable_tpoint_group", 00:05:41.295 "trace_enable_tpoint_group", 00:05:41.295 "trace_clear_tpoint_mask", 00:05:41.295 "trace_set_tpoint_mask", 00:05:41.295 "fsdev_set_opts", 00:05:41.295 "fsdev_get_opts", 00:05:41.295 "framework_get_pci_devices", 00:05:41.295 "framework_get_config", 00:05:41.295 "framework_get_subsystems", 00:05:41.295 "vfu_tgt_set_base_path", 00:05:41.295 "keyring_get_keys", 
00:05:41.295 "iobuf_get_stats", 00:05:41.295 "iobuf_set_options", 00:05:41.295 "sock_get_default_impl", 00:05:41.295 "sock_set_default_impl", 00:05:41.295 "sock_impl_set_options", 00:05:41.295 "sock_impl_get_options", 00:05:41.295 "vmd_rescan", 00:05:41.295 "vmd_remove_device", 00:05:41.295 "vmd_enable", 00:05:41.295 "accel_get_stats", 00:05:41.295 "accel_set_options", 00:05:41.295 "accel_set_driver", 00:05:41.295 "accel_crypto_key_destroy", 00:05:41.295 "accel_crypto_keys_get", 00:05:41.295 "accel_crypto_key_create", 00:05:41.295 "accel_assign_opc", 00:05:41.295 "accel_get_module_info", 00:05:41.295 "accel_get_opc_assignments", 00:05:41.295 "bdev_get_histogram", 00:05:41.295 "bdev_enable_histogram", 00:05:41.295 "bdev_set_qos_limit", 00:05:41.295 "bdev_set_qd_sampling_period", 00:05:41.295 "bdev_get_bdevs", 00:05:41.295 "bdev_reset_iostat", 00:05:41.295 "bdev_get_iostat", 00:05:41.295 "bdev_examine", 00:05:41.295 "bdev_wait_for_examine", 00:05:41.295 "bdev_set_options", 00:05:41.295 "scsi_get_devices", 00:05:41.295 "thread_set_cpumask", 00:05:41.295 "scheduler_set_options", 00:05:41.295 "framework_get_governor", 00:05:41.295 "framework_get_scheduler", 00:05:41.295 "framework_set_scheduler", 00:05:41.295 "framework_get_reactors", 00:05:41.295 "thread_get_io_channels", 00:05:41.295 "thread_get_pollers", 00:05:41.295 "thread_get_stats", 00:05:41.295 "framework_monitor_context_switch", 00:05:41.295 "spdk_kill_instance", 00:05:41.295 "log_enable_timestamps", 00:05:41.295 "log_get_flags", 00:05:41.295 "log_clear_flag", 00:05:41.295 "log_set_flag", 00:05:41.295 "log_get_level", 00:05:41.295 "log_set_level", 00:05:41.295 "log_get_print_level", 00:05:41.295 "log_set_print_level", 00:05:41.295 "framework_enable_cpumask_locks", 00:05:41.295 "framework_disable_cpumask_locks", 00:05:41.295 "framework_wait_init", 00:05:41.295 "framework_start_init", 00:05:41.295 "virtio_blk_create_transport", 00:05:41.295 "virtio_blk_get_transports", 00:05:41.295 "vhost_controller_set_coalescing", 00:05:41.295 "vhost_get_controllers", 00:05:41.295 "vhost_delete_controller", 00:05:41.295 "vhost_create_blk_controller", 00:05:41.295 "vhost_scsi_controller_remove_target", 00:05:41.295 "vhost_scsi_controller_add_target", 00:05:41.295 "vhost_start_scsi_controller", 00:05:41.295 "vhost_create_scsi_controller", 00:05:41.295 "ublk_recover_disk", 00:05:41.295 "ublk_get_disks", 00:05:41.295 "ublk_stop_disk", 00:05:41.295 "ublk_start_disk", 00:05:41.295 "ublk_destroy_target", 00:05:41.295 "ublk_create_target", 00:05:41.295 "nbd_get_disks", 00:05:41.295 "nbd_stop_disk", 00:05:41.295 "nbd_start_disk", 00:05:41.295 "env_dpdk_get_mem_stats", 00:05:41.295 "nvmf_stop_mdns_prr", 00:05:41.295 "nvmf_publish_mdns_prr", 00:05:41.295 "nvmf_subsystem_get_listeners", 00:05:41.295 "nvmf_subsystem_get_qpairs", 00:05:41.295 "nvmf_subsystem_get_controllers", 00:05:41.295 "nvmf_get_stats", 00:05:41.295 "nvmf_get_transports", 00:05:41.295 "nvmf_create_transport", 00:05:41.295 "nvmf_get_targets", 00:05:41.295 "nvmf_delete_target", 00:05:41.295 "nvmf_create_target", 00:05:41.295 "nvmf_subsystem_allow_any_host", 00:05:41.295 "nvmf_subsystem_set_keys", 00:05:41.295 "nvmf_subsystem_remove_host", 00:05:41.295 "nvmf_subsystem_add_host", 00:05:41.295 "nvmf_ns_remove_host", 00:05:41.295 "nvmf_ns_add_host", 00:05:41.295 "nvmf_subsystem_remove_ns", 00:05:41.295 "nvmf_subsystem_set_ns_ana_group", 00:05:41.295 "nvmf_subsystem_add_ns", 00:05:41.295 "nvmf_subsystem_listener_set_ana_state", 00:05:41.295 "nvmf_discovery_get_referrals", 00:05:41.295 
"nvmf_discovery_remove_referral", 00:05:41.295 "nvmf_discovery_add_referral", 00:05:41.295 "nvmf_subsystem_remove_listener", 00:05:41.295 "nvmf_subsystem_add_listener", 00:05:41.295 "nvmf_delete_subsystem", 00:05:41.295 "nvmf_create_subsystem", 00:05:41.295 "nvmf_get_subsystems", 00:05:41.295 "nvmf_set_crdt", 00:05:41.295 "nvmf_set_config", 00:05:41.295 "nvmf_set_max_subsystems", 00:05:41.295 "iscsi_get_histogram", 00:05:41.295 "iscsi_enable_histogram", 00:05:41.295 "iscsi_set_options", 00:05:41.295 "iscsi_get_auth_groups", 00:05:41.295 "iscsi_auth_group_remove_secret", 00:05:41.295 "iscsi_auth_group_add_secret", 00:05:41.295 "iscsi_delete_auth_group", 00:05:41.295 "iscsi_create_auth_group", 00:05:41.295 "iscsi_set_discovery_auth", 00:05:41.295 "iscsi_get_options", 00:05:41.295 "iscsi_target_node_request_logout", 00:05:41.295 "iscsi_target_node_set_redirect", 00:05:41.295 "iscsi_target_node_set_auth", 00:05:41.295 "iscsi_target_node_add_lun", 00:05:41.295 "iscsi_get_stats", 00:05:41.295 "iscsi_get_connections", 00:05:41.295 "iscsi_portal_group_set_auth", 00:05:41.295 "iscsi_start_portal_group", 00:05:41.295 "iscsi_delete_portal_group", 00:05:41.295 "iscsi_create_portal_group", 00:05:41.295 "iscsi_get_portal_groups", 00:05:41.295 "iscsi_delete_target_node", 00:05:41.295 "iscsi_target_node_remove_pg_ig_maps", 00:05:41.295 "iscsi_target_node_add_pg_ig_maps", 00:05:41.295 "iscsi_create_target_node", 00:05:41.295 "iscsi_get_target_nodes", 00:05:41.295 "iscsi_delete_initiator_group", 00:05:41.295 "iscsi_initiator_group_remove_initiators", 00:05:41.295 "iscsi_initiator_group_add_initiators", 00:05:41.295 "iscsi_create_initiator_group", 00:05:41.295 "iscsi_get_initiator_groups", 00:05:41.295 "fsdev_aio_delete", 00:05:41.295 "fsdev_aio_create", 00:05:41.295 "keyring_linux_set_options", 00:05:41.295 "keyring_file_remove_key", 00:05:41.295 "keyring_file_add_key", 00:05:41.295 "vfu_virtio_create_fs_endpoint", 00:05:41.295 "vfu_virtio_create_scsi_endpoint", 00:05:41.295 "vfu_virtio_scsi_remove_target", 00:05:41.296 "vfu_virtio_scsi_add_target", 00:05:41.296 "vfu_virtio_create_blk_endpoint", 00:05:41.296 "vfu_virtio_delete_endpoint", 00:05:41.296 "iaa_scan_accel_module", 00:05:41.296 "dsa_scan_accel_module", 00:05:41.296 "ioat_scan_accel_module", 00:05:41.296 "accel_error_inject_error", 00:05:41.296 "bdev_iscsi_delete", 00:05:41.296 "bdev_iscsi_create", 00:05:41.296 "bdev_iscsi_set_options", 00:05:41.296 "bdev_virtio_attach_controller", 00:05:41.296 "bdev_virtio_scsi_get_devices", 00:05:41.296 "bdev_virtio_detach_controller", 00:05:41.296 "bdev_virtio_blk_set_hotplug", 00:05:41.296 "bdev_ftl_set_property", 00:05:41.296 "bdev_ftl_get_properties", 00:05:41.296 "bdev_ftl_get_stats", 00:05:41.296 "bdev_ftl_unmap", 00:05:41.296 "bdev_ftl_unload", 00:05:41.296 "bdev_ftl_delete", 00:05:41.296 "bdev_ftl_load", 00:05:41.296 "bdev_ftl_create", 00:05:41.296 "bdev_aio_delete", 00:05:41.296 "bdev_aio_rescan", 00:05:41.296 "bdev_aio_create", 00:05:41.296 "blobfs_create", 00:05:41.296 "blobfs_detect", 00:05:41.296 "blobfs_set_cache_size", 00:05:41.296 "bdev_zone_block_delete", 00:05:41.296 "bdev_zone_block_create", 00:05:41.296 "bdev_delay_delete", 00:05:41.296 "bdev_delay_create", 00:05:41.296 "bdev_delay_update_latency", 00:05:41.296 "bdev_split_delete", 00:05:41.296 "bdev_split_create", 00:05:41.296 "bdev_error_inject_error", 00:05:41.296 "bdev_error_delete", 00:05:41.296 "bdev_error_create", 00:05:41.296 "bdev_raid_set_options", 00:05:41.296 "bdev_raid_remove_base_bdev", 00:05:41.296 "bdev_raid_add_base_bdev", 
00:05:41.296 "bdev_raid_delete", 00:05:41.296 "bdev_raid_create", 00:05:41.296 "bdev_raid_get_bdevs", 00:05:41.296 "bdev_lvol_set_parent_bdev", 00:05:41.296 "bdev_lvol_set_parent", 00:05:41.296 "bdev_lvol_check_shallow_copy", 00:05:41.296 "bdev_lvol_start_shallow_copy", 00:05:41.296 "bdev_lvol_grow_lvstore", 00:05:41.296 "bdev_lvol_get_lvols", 00:05:41.296 "bdev_lvol_get_lvstores", 00:05:41.296 "bdev_lvol_delete", 00:05:41.296 "bdev_lvol_set_read_only", 00:05:41.296 "bdev_lvol_resize", 00:05:41.296 "bdev_lvol_decouple_parent", 00:05:41.296 "bdev_lvol_inflate", 00:05:41.296 "bdev_lvol_rename", 00:05:41.296 "bdev_lvol_clone_bdev", 00:05:41.296 "bdev_lvol_clone", 00:05:41.296 "bdev_lvol_snapshot", 00:05:41.296 "bdev_lvol_create", 00:05:41.296 "bdev_lvol_delete_lvstore", 00:05:41.296 "bdev_lvol_rename_lvstore", 00:05:41.296 "bdev_lvol_create_lvstore", 00:05:41.296 "bdev_passthru_delete", 00:05:41.296 "bdev_passthru_create", 00:05:41.296 "bdev_nvme_cuse_unregister", 00:05:41.296 "bdev_nvme_cuse_register", 00:05:41.296 "bdev_opal_new_user", 00:05:41.296 "bdev_opal_set_lock_state", 00:05:41.296 "bdev_opal_delete", 00:05:41.296 "bdev_opal_get_info", 00:05:41.296 "bdev_opal_create", 00:05:41.296 "bdev_nvme_opal_revert", 00:05:41.296 "bdev_nvme_opal_init", 00:05:41.296 "bdev_nvme_send_cmd", 00:05:41.296 "bdev_nvme_set_keys", 00:05:41.296 "bdev_nvme_get_path_iostat", 00:05:41.296 "bdev_nvme_get_mdns_discovery_info", 00:05:41.296 "bdev_nvme_stop_mdns_discovery", 00:05:41.296 "bdev_nvme_start_mdns_discovery", 00:05:41.296 "bdev_nvme_set_multipath_policy", 00:05:41.296 "bdev_nvme_set_preferred_path", 00:05:41.296 "bdev_nvme_get_io_paths", 00:05:41.296 "bdev_nvme_remove_error_injection", 00:05:41.296 "bdev_nvme_add_error_injection", 00:05:41.296 "bdev_nvme_get_discovery_info", 00:05:41.296 "bdev_nvme_stop_discovery", 00:05:41.296 "bdev_nvme_start_discovery", 00:05:41.296 "bdev_nvme_get_controller_health_info", 00:05:41.296 "bdev_nvme_disable_controller", 00:05:41.296 "bdev_nvme_enable_controller", 00:05:41.296 "bdev_nvme_reset_controller", 00:05:41.296 "bdev_nvme_get_transport_statistics", 00:05:41.296 "bdev_nvme_apply_firmware", 00:05:41.296 "bdev_nvme_detach_controller", 00:05:41.296 "bdev_nvme_get_controllers", 00:05:41.296 "bdev_nvme_attach_controller", 00:05:41.296 "bdev_nvme_set_hotplug", 00:05:41.296 "bdev_nvme_set_options", 00:05:41.296 "bdev_null_resize", 00:05:41.296 "bdev_null_delete", 00:05:41.296 "bdev_null_create", 00:05:41.296 "bdev_malloc_delete", 00:05:41.296 "bdev_malloc_create" 00:05:41.296 ] 00:05:41.296 13:12:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:41.296 13:12:43 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:41.296 13:12:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.296 13:12:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:41.296 13:12:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 285003 00:05:41.296 13:12:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 285003 ']' 00:05:41.296 13:12:43 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 285003 00:05:41.296 13:12:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:41.554 13:12:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.554 13:12:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 285003 00:05:41.554 13:12:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.554 13:12:43 spdkcli_tcp -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.554 13:12:43 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 285003' 00:05:41.554 killing process with pid 285003 00:05:41.554 13:12:43 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 285003 00:05:41.554 13:12:43 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 285003 00:05:41.813 00:05:41.813 real 0m1.172s 00:05:41.813 user 0m1.921s 00:05:41.813 sys 0m0.498s 00:05:41.813 13:12:43 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.813 13:12:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.813 ************************************ 00:05:41.813 END TEST spdkcli_tcp 00:05:41.813 ************************************ 00:05:41.813 13:12:43 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:41.813 13:12:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.813 13:12:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.813 13:12:43 -- common/autotest_common.sh@10 -- # set +x 00:05:41.813 ************************************ 00:05:41.813 START TEST dpdk_mem_utility 00:05:41.813 ************************************ 00:05:41.813 13:12:43 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:42.072 * Looking for test storage... 00:05:42.072 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.072 13:12:44 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.072 --rc genhtml_branch_coverage=1 00:05:42.072 --rc genhtml_function_coverage=1 00:05:42.072 --rc genhtml_legend=1 00:05:42.072 --rc geninfo_all_blocks=1 00:05:42.072 --rc geninfo_unexecuted_blocks=1 00:05:42.072 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:42.072 ' 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.072 --rc genhtml_branch_coverage=1 00:05:42.072 --rc genhtml_function_coverage=1 00:05:42.072 --rc genhtml_legend=1 00:05:42.072 --rc geninfo_all_blocks=1 00:05:42.072 --rc geninfo_unexecuted_blocks=1 00:05:42.072 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:42.072 ' 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:42.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.072 --rc genhtml_branch_coverage=1 00:05:42.072 --rc genhtml_function_coverage=1 00:05:42.072 --rc genhtml_legend=1 00:05:42.072 --rc geninfo_all_blocks=1 00:05:42.072 --rc geninfo_unexecuted_blocks=1 00:05:42.072 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:42.072 ' 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.072 --rc genhtml_branch_coverage=1 00:05:42.072 --rc genhtml_function_coverage=1 00:05:42.072 --rc genhtml_legend=1 00:05:42.072 --rc geninfo_all_blocks=1 00:05:42.072 --rc geninfo_unexecuted_blocks=1 00:05:42.072 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:42.072 ' 00:05:42.072 13:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:42.072 13:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=285340 00:05:42.072 13:12:44 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.072 13:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 285340 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 285340 ']' 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.072 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.072 [2024-12-09 13:12:44.199141] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:05:42.072 [2024-12-09 13:12:44.199202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285340 ] 00:05:42.072 [2024-12-09 13:12:44.282851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.331 [2024-12-09 13:12:44.325561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.331 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.331 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:42.331 13:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:42.331 13:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:42.331 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.331 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.331 { 00:05:42.331 "filename": "/tmp/spdk_mem_dump.txt" 00:05:42.331 } 00:05:42.331 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.331 13:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:42.590 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:42.590 1 heaps totaling size 818.000000 MiB 00:05:42.590 size: 818.000000 MiB heap id: 0 00:05:42.590 end heaps---------- 00:05:42.590 9 mempools totaling size 603.782043 MiB 00:05:42.590 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:42.590 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:42.590 size: 100.555481 MiB name: bdev_io_285340 00:05:42.590 size: 50.003479 MiB name: msgpool_285340 00:05:42.590 size: 36.509338 MiB name: fsdev_io_285340 00:05:42.590 size: 21.763794 MiB name: PDU_Pool 00:05:42.590 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:42.590 size: 4.133484 MiB name: evtpool_285340 00:05:42.590 size: 0.026123 MiB name: Session_Pool 00:05:42.590 end mempools------- 00:05:42.590 6 memzones totaling size 4.142822 MiB 00:05:42.590 size: 1.000366 MiB name: RG_ring_0_285340 00:05:42.590 size: 1.000366 MiB name: RG_ring_1_285340 00:05:42.590 size: 1.000366 MiB name: RG_ring_4_285340 
00:05:42.590 size: 1.000366 MiB name: RG_ring_5_285340 00:05:42.590 size: 0.125366 MiB name: RG_ring_2_285340 00:05:42.590 size: 0.015991 MiB name: RG_ring_3_285340 00:05:42.590 end memzones------- 00:05:42.590 13:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:42.590 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:42.590 list of free elements. size: 10.852478 MiB 00:05:42.590 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:42.590 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:42.590 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:42.590 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:42.590 element at address: 0x200008000000 with size: 0.959839 MiB 00:05:42.590 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:42.590 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:42.590 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:42.590 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:42.591 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:42.591 element at address: 0x200003e00000 with size: 0.490723 MiB 00:05:42.591 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:42.591 element at address: 0x200010600000 with size: 0.481934 MiB 00:05:42.591 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:42.591 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:42.591 list of standard malloc elements. size: 199.218628 MiB 00:05:42.591 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:05:42.591 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:05:42.591 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:42.591 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:42.591 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:42.591 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:42.591 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:42.591 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:42.591 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:42.591 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:42.591 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:42.591 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:42.591 element at address: 0x20000085b100 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000008df880 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:42.591 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:42.591 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:42.591 element at address: 0x200000cff0c0 with size: 0.000183 
MiB 00:05:42.591 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:05:42.591 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:05:42.591 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:05:42.591 element at address: 0x20001067b600 with size: 0.000183 MiB 00:05:42.591 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:05:42.591 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:42.591 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:42.591 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:42.591 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:42.591 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:42.591 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:42.591 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:42.591 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:42.591 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:42.591 list of memzone associated elements. size: 607.928894 MiB 00:05:42.591 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:42.591 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:42.591 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:42.591 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:42.591 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:42.591 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_285340_0 00:05:42.591 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:42.591 associated memzone info: size: 48.002930 MiB name: MP_msgpool_285340_0 00:05:42.591 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:05:42.591 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_285340_0 00:05:42.591 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:42.591 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:42.591 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:42.591 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:42.591 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:42.591 associated memzone info: size: 3.000122 MiB name: MP_evtpool_285340_0 00:05:42.591 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:42.591 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_285340 00:05:42.591 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:42.591 associated memzone info: size: 1.007996 MiB name: MP_evtpool_285340 00:05:42.591 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:05:42.591 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:42.591 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:42.591 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:42.591 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:05:42.591 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:42.591 element at address: 0x200003efde40 with size: 1.008118 MiB 00:05:42.591 associated memzone info: size: 1.007996 MiB name: 
MP_SCSI_TASK_Pool 00:05:42.591 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:42.591 associated memzone info: size: 1.000366 MiB name: RG_ring_0_285340 00:05:42.591 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:42.591 associated memzone info: size: 1.000366 MiB name: RG_ring_1_285340 00:05:42.591 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:42.591 associated memzone info: size: 1.000366 MiB name: RG_ring_4_285340 00:05:42.591 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:42.591 associated memzone info: size: 1.000366 MiB name: RG_ring_5_285340 00:05:42.591 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:05:42.591 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_285340 00:05:42.591 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:42.591 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_285340 00:05:42.591 element at address: 0x20001067b780 with size: 0.500488 MiB 00:05:42.591 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:42.591 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:05:42.591 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:42.591 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:42.591 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:42.591 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:42.591 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_285340 00:05:42.591 element at address: 0x2000008df940 with size: 0.125488 MiB 00:05:42.591 associated memzone info: size: 0.125366 MiB name: RG_ring_2_285340 00:05:42.591 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:05:42.591 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:42.591 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:42.591 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:42.591 element at address: 0x2000008db680 with size: 0.016113 MiB 00:05:42.591 associated memzone info: size: 0.015991 MiB name: RG_ring_3_285340 00:05:42.591 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:42.591 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:42.591 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:42.591 associated memzone info: size: 0.000183 MiB name: MP_msgpool_285340 00:05:42.591 element at address: 0x2000008db480 with size: 0.000305 MiB 00:05:42.591 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_285340 00:05:42.591 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:42.591 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_285340 00:05:42.591 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:42.591 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:42.591 13:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:42.591 13:12:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 285340 00:05:42.591 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 285340 ']' 00:05:42.591 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 285340 00:05:42.591 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:42.591 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:05:42.591 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 285340 00:05:42.591 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.591 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.591 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 285340' 00:05:42.591 killing process with pid 285340 00:05:42.591 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 285340 00:05:42.591 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 285340 00:05:42.851 00:05:42.851 real 0m1.015s 00:05:42.851 user 0m0.910s 00:05:42.851 sys 0m0.441s 00:05:42.851 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.851 13:12:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.851 ************************************ 00:05:42.851 END TEST dpdk_mem_utility 00:05:42.851 ************************************ 00:05:42.851 13:12:45 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:42.851 13:12:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.851 13:12:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.851 13:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:42.851 ************************************ 00:05:42.851 START TEST event 00:05:42.851 ************************************ 00:05:42.851 13:12:45 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:43.109 * Looking for test storage... 00:05:43.110 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:43.110 13:12:45 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:43.110 13:12:45 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:43.110 13:12:45 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:43.110 13:12:45 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:43.110 13:12:45 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.110 13:12:45 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.110 13:12:45 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.110 13:12:45 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.110 13:12:45 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.110 13:12:45 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.110 13:12:45 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.110 13:12:45 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.110 13:12:45 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.110 13:12:45 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.110 13:12:45 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.110 13:12:45 event -- scripts/common.sh@344 -- # case "$op" in 00:05:43.110 13:12:45 event -- scripts/common.sh@345 -- # : 1 00:05:43.110 13:12:45 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.110 13:12:45 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.110 13:12:45 event -- scripts/common.sh@365 -- # decimal 1 00:05:43.110 13:12:45 event -- scripts/common.sh@353 -- # local d=1 00:05:43.110 13:12:45 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.110 13:12:45 event -- scripts/common.sh@355 -- # echo 1 00:05:43.110 13:12:45 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.110 13:12:45 event -- scripts/common.sh@366 -- # decimal 2 00:05:43.110 13:12:45 event -- scripts/common.sh@353 -- # local d=2 00:05:43.110 13:12:45 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.110 13:12:45 event -- scripts/common.sh@355 -- # echo 2 00:05:43.110 13:12:45 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.110 13:12:45 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.110 13:12:45 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.110 13:12:45 event -- scripts/common.sh@368 -- # return 0 00:05:43.110 13:12:45 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.110 13:12:45 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:43.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.110 --rc genhtml_branch_coverage=1 00:05:43.110 --rc genhtml_function_coverage=1 00:05:43.110 --rc genhtml_legend=1 00:05:43.110 --rc geninfo_all_blocks=1 00:05:43.110 --rc geninfo_unexecuted_blocks=1 00:05:43.110 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:43.110 ' 00:05:43.110 13:12:45 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:43.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.110 --rc genhtml_branch_coverage=1 00:05:43.110 --rc genhtml_function_coverage=1 00:05:43.110 --rc genhtml_legend=1 00:05:43.110 --rc geninfo_all_blocks=1 00:05:43.110 --rc geninfo_unexecuted_blocks=1 00:05:43.110 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:43.110 ' 00:05:43.110 13:12:45 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:43.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.110 --rc genhtml_branch_coverage=1 00:05:43.110 --rc genhtml_function_coverage=1 00:05:43.110 --rc genhtml_legend=1 00:05:43.110 --rc geninfo_all_blocks=1 00:05:43.110 --rc geninfo_unexecuted_blocks=1 00:05:43.110 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:43.110 ' 00:05:43.110 13:12:45 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:43.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.110 --rc genhtml_branch_coverage=1 00:05:43.110 --rc genhtml_function_coverage=1 00:05:43.110 --rc genhtml_legend=1 00:05:43.110 --rc geninfo_all_blocks=1 00:05:43.110 --rc geninfo_unexecuted_blocks=1 00:05:43.110 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:43.110 ' 00:05:43.110 13:12:45 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:43.110 13:12:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:43.110 13:12:45 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:43.110 13:12:45 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:43.110 13:12:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:05:43.110 13:12:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.110 ************************************ 00:05:43.110 START TEST event_perf 00:05:43.110 ************************************ 00:05:43.110 13:12:45 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:43.110 Running I/O for 1 seconds...[2024-12-09 13:12:45.327718] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:05:43.110 [2024-12-09 13:12:45.327830] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285615 ] 00:05:43.367 [2024-12-09 13:12:45.416036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.367 [2024-12-09 13:12:45.459513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.367 [2024-12-09 13:12:45.459633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.367 [2024-12-09 13:12:45.459665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.367 [2024-12-09 13:12:45.459666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.301 Running I/O for 1 seconds... 00:05:44.301 lcore 0: 194315 00:05:44.301 lcore 1: 194314 00:05:44.301 lcore 2: 194313 00:05:44.301 lcore 3: 194314 00:05:44.301 done. 00:05:44.301 00:05:44.301 real 0m1.189s 00:05:44.301 user 0m4.099s 00:05:44.301 sys 0m0.088s 00:05:44.301 13:12:46 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.301 13:12:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.301 ************************************ 00:05:44.301 END TEST event_perf 00:05:44.301 ************************************ 00:05:44.301 13:12:46 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:44.301 13:12:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:44.301 13:12:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.301 13:12:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.560 ************************************ 00:05:44.560 START TEST event_reactor 00:05:44.560 ************************************ 00:05:44.560 13:12:46 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:44.560 [2024-12-09 13:12:46.598160] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:05:44.560 [2024-12-09 13:12:46.598246] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285763 ] 00:05:44.560 [2024-12-09 13:12:46.686944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.560 [2024-12-09 13:12:46.729255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.938 test_start 00:05:45.938 oneshot 00:05:45.938 tick 100 00:05:45.938 tick 100 00:05:45.938 tick 250 00:05:45.938 tick 100 00:05:45.938 tick 100 00:05:45.938 tick 100 00:05:45.938 tick 250 00:05:45.938 tick 500 00:05:45.938 tick 100 00:05:45.938 tick 100 00:05:45.938 tick 250 00:05:45.938 tick 100 00:05:45.938 tick 100 00:05:45.938 test_end 00:05:45.938 00:05:45.938 real 0m1.183s 00:05:45.938 user 0m1.084s 00:05:45.938 sys 0m0.095s 00:05:45.938 13:12:47 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.938 13:12:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:45.938 ************************************ 00:05:45.938 END TEST event_reactor 00:05:45.938 ************************************ 00:05:45.938 13:12:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.938 13:12:47 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:45.938 13:12:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.938 13:12:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.938 ************************************ 00:05:45.938 START TEST event_reactor_perf 00:05:45.938 ************************************ 00:05:45.938 13:12:47 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.938 [2024-12-09 13:12:47.869956] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:05:45.938 [2024-12-09 13:12:47.870057] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285993 ] 00:05:45.938 [2024-12-09 13:12:47.960254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.938 [2024-12-09 13:12:48.003425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.876 test_start 00:05:46.876 test_end 00:05:46.876 Performance: 953177 events per second 00:05:46.876 00:05:46.876 real 0m1.187s 00:05:46.876 user 0m1.089s 00:05:46.876 sys 0m0.094s 00:05:46.876 13:12:49 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.876 13:12:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.876 ************************************ 00:05:46.876 END TEST event_reactor_perf 00:05:46.876 ************************************ 00:05:46.876 13:12:49 event -- event/event.sh@49 -- # uname -s 00:05:46.876 13:12:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:46.876 13:12:49 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:46.876 13:12:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.876 13:12:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.876 13:12:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.136 ************************************ 00:05:47.136 START TEST event_scheduler 00:05:47.136 ************************************ 00:05:47.136 13:12:49 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:47.136 * Looking for test storage... 
00:05:47.136 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:47.136 13:12:49 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.136 13:12:49 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.136 13:12:49 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.136 13:12:49 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.136 13:12:49 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:47.136 13:12:49 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.136 13:12:49 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.136 --rc genhtml_branch_coverage=1 00:05:47.136 --rc genhtml_function_coverage=1 00:05:47.136 --rc genhtml_legend=1 00:05:47.136 --rc geninfo_all_blocks=1 00:05:47.136 --rc geninfo_unexecuted_blocks=1 00:05:47.136 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.136 ' 00:05:47.136 13:12:49 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.136 --rc genhtml_branch_coverage=1 00:05:47.136 --rc genhtml_function_coverage=1 00:05:47.136 --rc genhtml_legend=1 00:05:47.136 --rc geninfo_all_blocks=1 00:05:47.136 --rc geninfo_unexecuted_blocks=1 00:05:47.136 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.136 ' 00:05:47.136 13:12:49 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.136 --rc genhtml_branch_coverage=1 00:05:47.136 --rc genhtml_function_coverage=1 00:05:47.136 --rc genhtml_legend=1 00:05:47.136 --rc geninfo_all_blocks=1 00:05:47.136 --rc geninfo_unexecuted_blocks=1 00:05:47.136 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.136 ' 00:05:47.136 13:12:49 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.136 --rc genhtml_branch_coverage=1 00:05:47.136 --rc genhtml_function_coverage=1 00:05:47.136 --rc genhtml_legend=1 00:05:47.136 --rc geninfo_all_blocks=1 00:05:47.136 --rc geninfo_unexecuted_blocks=1 00:05:47.136 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.136 ' 00:05:47.136 13:12:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:47.136 13:12:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=286308 00:05:47.136 13:12:49 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.137 13:12:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:47.137 13:12:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 286308 00:05:47.137 13:12:49 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 286308 ']' 00:05:47.137 13:12:49 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.137 13:12:49 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.137 13:12:49 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.137 13:12:49 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.137 13:12:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.137 [2024-12-09 13:12:49.354175] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:05:47.137 [2024-12-09 13:12:49.354243] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286308 ] 00:05:47.396 [2024-12-09 13:12:49.441743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.396 [2024-12-09 13:12:49.487411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.396 [2024-12-09 13:12:49.487522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.396 [2024-12-09 13:12:49.487638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.396 [2024-12-09 13:12:49.487640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.396 13:12:49 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.396 13:12:49 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:47.396 13:12:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:47.396 13:12:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.396 13:12:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.396 [2024-12-09 13:12:49.532368] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:47.396 [2024-12-09 13:12:49.532388] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:47.396 [2024-12-09 13:12:49.532399] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:47.396 [2024-12-09 13:12:49.532407] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:47.396 [2024-12-09 13:12:49.532414] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:47.396 13:12:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.396 13:12:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:47.396 13:12:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.396 
13:12:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.396 [2024-12-09 13:12:49.607213] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:47.396 13:12:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.396 13:12:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:47.396 13:12:49 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.396 13:12:49 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.396 13:12:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.655 ************************************ 00:05:47.655 START TEST scheduler_create_thread 00:05:47.655 ************************************ 00:05:47.655 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:47.655 13:12:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:47.655 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.655 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.655 2 00:05:47.655 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.655 13:12:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:47.655 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.655 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.655 3 00:05:47.655 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.655 13:12:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:47.655 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.655 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.655 4 00:05:47.655 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.655 13:12:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.656 5 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.656 13:12:49 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.656 6 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.656 7 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.656 8 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.656 9 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.656 10 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.656 13:12:49 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.656 13:12:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.030 13:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.030 13:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:49.030 13:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:49.030 13:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.030 13:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.403 13:12:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.403 00:05:50.403 real 0m2.620s 00:05:50.403 user 0m0.025s 00:05:50.403 sys 0m0.006s 00:05:50.403 13:12:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.403 13:12:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.403 ************************************ 00:05:50.403 END TEST scheduler_create_thread 00:05:50.403 ************************************ 00:05:50.403 13:12:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:50.403 13:12:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 286308 00:05:50.403 13:12:52 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 286308 ']' 00:05:50.403 13:12:52 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 286308 00:05:50.403 13:12:52 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:50.403 13:12:52 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.403 13:12:52 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286308 00:05:50.403 13:12:52 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:50.403 13:12:52 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:50.403 13:12:52 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286308' 00:05:50.403 killing process with pid 286308 00:05:50.403 13:12:52 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 286308 00:05:50.403 13:12:52 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 286308 00:05:50.661 [2024-12-09 13:12:52.749385] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
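The scheduler_create_thread run that just stopped is driven entirely over RPC. As a minimal sketch (assuming rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, as the waitforlisten line above suggests; thread ids 11 and 12 are simply the ids this trace happened to report), the sequence condenses to roughly:

    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    rpc_cmd framework_set_scheduler dynamic        # pick the dynamic scheduler
    rpc_cmd framework_start_init                   # finish subsystem init
    # one pinned active and one pinned idle thread per core (masks 0x1..0x8)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # unpinned threads: change one thread's activity, create and delete another
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0      # -> thread_id 11
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100        # -> thread_id 12
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
    killprocess $scheduler_pid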
00:05:50.920 00:05:50.920 real 0m3.782s 00:05:50.920 user 0m5.614s 00:05:50.920 sys 0m0.446s 00:05:50.920 13:12:52 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.920 13:12:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.920 ************************************ 00:05:50.920 END TEST event_scheduler 00:05:50.920 ************************************ 00:05:50.920 13:12:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:50.920 13:12:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:50.920 13:12:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.920 13:12:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.920 13:12:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.920 ************************************ 00:05:50.920 START TEST app_repeat 00:05:50.920 ************************************ 00:05:50.920 13:12:53 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:50.920 13:12:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.920 13:12:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.920 13:12:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:50.920 13:12:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.920 13:12:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:50.920 13:12:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:50.920 13:12:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:50.920 13:12:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=287064 00:05:50.920 13:12:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.920 13:12:53 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:50.921 13:12:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 287064' 00:05:50.921 Process app_repeat pid: 287064 00:05:50.921 13:12:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.921 13:12:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:50.921 spdk_app_start Round 0 00:05:50.921 13:12:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 287064 /var/tmp/spdk-nbd.sock 00:05:50.921 13:12:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 287064 ']' 00:05:50.921 13:12:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.921 13:12:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.921 13:12:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.921 13:12:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.921 13:12:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.921 [2024-12-09 13:12:53.032435] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:05:50.921 [2024-12-09 13:12:53.032519] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid287064 ] 00:05:50.921 [2024-12-09 13:12:53.123874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.179 [2024-12-09 13:12:53.168080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.179 [2024-12-09 13:12:53.168081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.179 13:12:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.179 13:12:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:51.179 13:12:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.436 Malloc0 00:05:51.436 13:12:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.436 Malloc1 00:05:51.436 13:12:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.436 13:12:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.694 /dev/nbd0 00:05:51.694 13:12:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.694 13:12:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.694 13:12:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:51.694 13:12:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.694 13:12:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.694 13:12:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.694 13:12:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:51.694 13:12:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.694 13:12:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.694 13:12:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.694 13:12:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.694 1+0 records in 00:05:51.694 1+0 records out 00:05:51.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218959 s, 18.7 MB/s 00:05:51.694 13:12:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:51.694 13:12:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.694 13:12:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:51.694 13:12:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.694 13:12:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.694 13:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.694 13:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.694 13:12:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.952 /dev/nbd1 00:05:51.952 13:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.952 13:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.952 13:12:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:51.952 13:12:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.952 13:12:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.952 13:12:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.952 13:12:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:51.952 13:12:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.952 13:12:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.952 13:12:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.952 13:12:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.952 1+0 records in 00:05:51.952 1+0 records out 00:05:51.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259367 s, 15.8 MB/s 00:05:51.952 13:12:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:51.952 13:12:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.952 13:12:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:51.952 13:12:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.952 13:12:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.952 13:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.952 13:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:05:51.952 13:12:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.952 13:12:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.952 13:12:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.210 { 00:05:52.210 "nbd_device": "/dev/nbd0", 00:05:52.210 "bdev_name": "Malloc0" 00:05:52.210 }, 00:05:52.210 { 00:05:52.210 "nbd_device": "/dev/nbd1", 00:05:52.210 "bdev_name": "Malloc1" 00:05:52.210 } 00:05:52.210 ]' 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.210 { 00:05:52.210 "nbd_device": "/dev/nbd0", 00:05:52.210 "bdev_name": "Malloc0" 00:05:52.210 }, 00:05:52.210 { 00:05:52.210 "nbd_device": "/dev/nbd1", 00:05:52.210 "bdev_name": "Malloc1" 00:05:52.210 } 00:05:52.210 ]' 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.210 /dev/nbd1' 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.210 /dev/nbd1' 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.210 256+0 records in 00:05:52.210 256+0 records out 00:05:52.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108027 s, 97.1 MB/s 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.210 13:12:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.469 256+0 records in 00:05:52.469 256+0 records out 00:05:52.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205812 s, 50.9 MB/s 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.469 256+0 records in 00:05:52.469 256+0 records out 00:05:52.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02135 s, 49.1 MB/s 
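The nbd traffic traced here (and repeated in the later rounds) is the generic bdev/nbd_common.sh data check: fill a scratch file with random data, push it through each exported /dev/nbdN with direct I/O, then compare the devices back against the file in the verify pass that follows. A minimal sketch of that flow, reusing the paths and sizes from the trace (not a verbatim excerpt of the helper):

    rnd=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$rnd bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$rnd of=$nbd bs=4096 count=256 oflag=direct     # write pass (just traced above)
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $rnd $nbd                                # verify pass (follows below)
    done
    rm $rnd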
00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:52.469 13:12:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.470 13:12:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.727 13:12:54 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.727 13:12:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.985 13:12:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.985 13:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.985 13:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.985 13:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.985 13:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.985 13:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.985 13:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.985 13:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.985 13:12:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.985 13:12:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.985 13:12:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.985 13:12:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.985 13:12:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.243 13:12:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:53.501 [2024-12-09 13:12:55.522755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.501 [2024-12-09 13:12:55.559555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.501 [2024-12-09 13:12:55.559556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.501 [2024-12-09 13:12:55.598896] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.501 [2024-12-09 13:12:55.598936] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.779 13:12:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.779 13:12:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:56.779 spdk_app_start Round 1 00:05:56.779 13:12:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 287064 /var/tmp/spdk-nbd.sock 00:05:56.779 13:12:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 287064 ']' 00:05:56.779 13:12:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.779 13:12:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.779 13:12:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:56.779 13:12:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.779 13:12:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.779 13:12:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.779 13:12:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:56.779 13:12:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.779 Malloc0 00:05:56.779 13:12:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.779 Malloc1 00:05:56.779 13:12:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.779 13:12:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.037 /dev/nbd0 00:05:57.037 13:12:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.037 13:12:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.037 13:12:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:57.037 13:12:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:57.037 13:12:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.037 13:12:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.037 13:12:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:57.037 13:12:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:57.037 13:12:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.037 13:12:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.037 13:12:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.037 1+0 records in 00:05:57.037 1+0 records out 00:05:57.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022774 s, 18.0 MB/s 00:05:57.037 13:12:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:57.037 13:12:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:57.037 13:12:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:57.037 13:12:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.037 13:12:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:57.037 13:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.037 13:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.037 13:12:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.295 /dev/nbd1 00:05:57.295 13:12:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.295 13:12:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.295 13:12:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:57.296 13:12:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:57.296 13:12:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.296 13:12:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.296 13:12:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:57.296 13:12:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:57.296 13:12:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.296 13:12:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.296 13:12:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.296 1+0 records in 00:05:57.296 1+0 records out 00:05:57.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254513 s, 16.1 MB/s 00:05:57.296 13:12:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:57.296 13:12:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:57.296 13:12:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:57.296 13:12:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.296 13:12:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:57.296 13:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.296 13:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.296 13:12:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.296 13:12:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.296 13:12:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.554 { 00:05:57.554 "nbd_device": "/dev/nbd0", 00:05:57.554 "bdev_name": "Malloc0" 00:05:57.554 }, 00:05:57.554 { 00:05:57.554 "nbd_device": "/dev/nbd1", 00:05:57.554 "bdev_name": "Malloc1" 00:05:57.554 } 00:05:57.554 ]' 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.554 { 00:05:57.554 "nbd_device": "/dev/nbd0", 00:05:57.554 "bdev_name": "Malloc0" 00:05:57.554 }, 00:05:57.554 { 00:05:57.554 "nbd_device": "/dev/nbd1", 00:05:57.554 "bdev_name": "Malloc1" 00:05:57.554 } 00:05:57.554 ]' 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.554 /dev/nbd1' 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.554 /dev/nbd1' 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.554 256+0 records in 00:05:57.554 256+0 records out 00:05:57.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110386 s, 95.0 MB/s 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.554 256+0 records in 00:05:57.554 256+0 records out 00:05:57.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203297 s, 51.6 MB/s 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.554 13:12:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.812 256+0 records in 00:05:57.812 256+0 records out 00:05:57.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021855 s, 48.0 MB/s 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.812 13:12:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.812 13:13:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.812 13:13:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.812 13:13:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.812 13:13:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.812 13:13:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.812 13:13:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.812 13:13:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.812 13:13:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.812 13:13:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.812 13:13:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.070 13:13:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.070 13:13:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.070 13:13:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.070 13:13:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.070 13:13:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.070 13:13:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.070 13:13:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.070 13:13:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.070 13:13:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.070 13:13:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.070 13:13:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.328 13:13:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.328 13:13:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.328 13:13:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.328 13:13:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.328 13:13:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.328 13:13:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.328 13:13:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:58.328 13:13:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.328 13:13:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.328 13:13:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.328 13:13:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.328 13:13:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:58.328 13:13:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.586 13:13:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:58.844 [2024-12-09 13:13:00.865276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.844 [2024-12-09 13:13:00.903572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.844 [2024-12-09 13:13:00.903572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.844 [2024-12-09 13:13:00.944160] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:58.844 [2024-12-09 13:13:00.944204] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.124 13:13:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.124 13:13:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:02.124 spdk_app_start Round 2 00:06:02.124 13:13:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 287064 /var/tmp/spdk-nbd.sock 00:06:02.124 13:13:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 287064 ']' 00:06:02.124 13:13:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.124 13:13:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.124 13:13:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
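Round 2, now starting, repeats the same cycle as Rounds 0 and 1: app_repeat itself was started once with -t 4 and re-runs spdk_app_start after every teardown, while the event.sh driver loops over the three rounds, re-creating the Malloc bdevs and re-running the nbd check each time. An outline of that loop (rpc.py stands for the full scripts/rpc.py path shown in the trace; this is a condensation, not the script itself):

    app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    for i in {0..2}; do
        waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock
        rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc0
        rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc1
        # nbd export + write/verify pass as sketched earlier
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    done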
00:06:02.124 13:13:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.124 13:13:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.124 13:13:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.124 13:13:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:02.124 13:13:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.124 Malloc0 00:06:02.124 13:13:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.124 Malloc1 00:06:02.124 13:13:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.124 13:13:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.382 /dev/nbd0 00:06:02.382 13:13:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.382 13:13:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.382 13:13:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:02.382 13:13:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.382 13:13:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.382 13:13:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.382 13:13:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:02.382 13:13:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.382 13:13:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.382 13:13:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.382 13:13:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.382 1+0 records in 00:06:02.382 1+0 records out 00:06:02.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228631 s, 17.9 MB/s 00:06:02.382 13:13:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.382 13:13:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.382 13:13:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.382 13:13:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.382 13:13:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.382 13:13:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.382 13:13:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.382 13:13:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.639 /dev/nbd1 00:06:02.639 13:13:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.639 13:13:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.639 13:13:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:02.639 13:13:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.639 13:13:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.639 13:13:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.639 13:13:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:02.639 13:13:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.639 13:13:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.639 13:13:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.639 13:13:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.639 1+0 records in 00:06:02.639 1+0 records out 00:06:02.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241106 s, 17.0 MB/s 00:06:02.639 13:13:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.639 13:13:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.639 13:13:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.639 13:13:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.639 13:13:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.639 13:13:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.639 13:13:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.639 13:13:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.639 13:13:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.639 13:13:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.897 { 00:06:02.897 "nbd_device": "/dev/nbd0", 00:06:02.897 "bdev_name": "Malloc0" 00:06:02.897 }, 00:06:02.897 { 00:06:02.897 "nbd_device": "/dev/nbd1", 00:06:02.897 "bdev_name": "Malloc1" 00:06:02.897 } 00:06:02.897 ]' 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.897 { 00:06:02.897 "nbd_device": "/dev/nbd0", 00:06:02.897 "bdev_name": "Malloc0" 00:06:02.897 }, 00:06:02.897 { 00:06:02.897 "nbd_device": "/dev/nbd1", 00:06:02.897 "bdev_name": "Malloc1" 00:06:02.897 } 00:06:02.897 ]' 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.897 /dev/nbd1' 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.897 /dev/nbd1' 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.897 13:13:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.897 256+0 records in 00:06:02.897 256+0 records out 00:06:02.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011544 s, 90.8 MB/s 00:06:02.898 13:13:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.898 13:13:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.898 256+0 records in 00:06:02.898 256+0 records out 00:06:02.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197822 s, 53.0 MB/s 00:06:02.898 13:13:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.898 13:13:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.156 256+0 records in 00:06:03.156 256+0 records out 00:06:03.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222531 s, 47.1 MB/s 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.156 13:13:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.157 13:13:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.157 13:13:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.157 13:13:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.157 13:13:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.414 13:13:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.414 13:13:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.414 13:13:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.414 13:13:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.414 13:13:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.414 13:13:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.414 13:13:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.414 13:13:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.414 13:13:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.414 13:13:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.414 13:13:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.672 13:13:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.672 13:13:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.672 13:13:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.672 13:13:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.672 13:13:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.672 13:13:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.672 13:13:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.673 13:13:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.673 13:13:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.673 13:13:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.673 13:13:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.673 13:13:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.673 13:13:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.934 13:13:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.192 [2024-12-09 13:13:06.217840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.192 [2024-12-09 13:13:06.257330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.192 [2024-12-09 13:13:06.257331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.192 [2024-12-09 13:13:06.296734] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.192 [2024-12-09 13:13:06.296778] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.474 13:13:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 287064 /var/tmp/spdk-nbd.sock 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 287064 ']' 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
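The nbd round-trip exercised above reduces to the RPC/command sequence below. This is only a condensed restatement of the trace, not new behaviour; the RPC socket, bdev/nbd names and the 1 MiB size are the values from this run, and the temp-file path is an arbitrary stand-in:

    # create a 64 MiB malloc bdev with a 4 KiB block size and expose it over nbd
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096       # prints Malloc0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    # (the run above repeats this for Malloc1 -> /dev/nbd1)
    # write 256 x 4 KiB of random data through the nbd device, then compare it back
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0
    # tear down, confirm no nbd devices are left, then stop the target
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM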
00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:07.474 13:13:09 event.app_repeat -- event/event.sh@39 -- # killprocess 287064 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 287064 ']' 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 287064 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 287064 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 287064' 00:06:07.474 killing process with pid 287064 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@973 -- # kill 287064 00:06:07.474 13:13:09 event.app_repeat -- common/autotest_common.sh@978 -- # wait 287064 00:06:07.474 spdk_app_start is called in Round 0. 00:06:07.474 Shutdown signal received, stop current app iteration 00:06:07.474 Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 reinitialization... 00:06:07.474 spdk_app_start is called in Round 1. 00:06:07.474 Shutdown signal received, stop current app iteration 00:06:07.474 Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 reinitialization... 00:06:07.474 spdk_app_start is called in Round 2. 00:06:07.474 Shutdown signal received, stop current app iteration 00:06:07.474 Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 reinitialization... 00:06:07.475 spdk_app_start is called in Round 3. 
00:06:07.475 Shutdown signal received, stop current app iteration 00:06:07.475 13:13:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:07.475 13:13:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:07.475 00:06:07.475 real 0m16.463s 00:06:07.475 user 0m35.569s 00:06:07.475 sys 0m3.174s 00:06:07.475 13:13:09 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.475 13:13:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.475 ************************************ 00:06:07.475 END TEST app_repeat 00:06:07.475 ************************************ 00:06:07.475 13:13:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:07.475 13:13:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:07.475 13:13:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.475 13:13:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.475 13:13:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.475 ************************************ 00:06:07.475 START TEST cpu_locks 00:06:07.475 ************************************ 00:06:07.475 13:13:09 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:07.475 * Looking for test storage... 00:06:07.475 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:07.475 13:13:09 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:07.475 13:13:09 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:07.475 13:13:09 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:07.735 13:13:09 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.735 13:13:09 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:07.735 13:13:09 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.735 13:13:09 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:07.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.735 --rc genhtml_branch_coverage=1 00:06:07.735 --rc genhtml_function_coverage=1 00:06:07.735 --rc genhtml_legend=1 00:06:07.735 --rc geninfo_all_blocks=1 00:06:07.735 --rc geninfo_unexecuted_blocks=1 00:06:07.735 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:07.735 ' 00:06:07.735 13:13:09 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:07.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.735 --rc genhtml_branch_coverage=1 00:06:07.735 --rc genhtml_function_coverage=1 00:06:07.735 --rc genhtml_legend=1 00:06:07.735 --rc geninfo_all_blocks=1 00:06:07.735 --rc geninfo_unexecuted_blocks=1 00:06:07.735 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:07.735 ' 00:06:07.735 13:13:09 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:07.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.735 --rc genhtml_branch_coverage=1 00:06:07.735 --rc genhtml_function_coverage=1 00:06:07.735 --rc genhtml_legend=1 00:06:07.735 --rc geninfo_all_blocks=1 00:06:07.735 --rc geninfo_unexecuted_blocks=1 00:06:07.735 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:07.735 ' 00:06:07.735 13:13:09 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:07.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.735 --rc genhtml_branch_coverage=1 00:06:07.735 --rc genhtml_function_coverage=1 00:06:07.735 --rc genhtml_legend=1 00:06:07.735 --rc geninfo_all_blocks=1 00:06:07.735 --rc geninfo_unexecuted_blocks=1 00:06:07.735 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:07.735 ' 00:06:07.735 13:13:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:07.735 13:13:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:07.735 13:13:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:07.735 13:13:09 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:07.735 13:13:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.735 13:13:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.735 13:13:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.735 ************************************ 00:06:07.735 START TEST default_locks 00:06:07.735 ************************************ 00:06:07.735 13:13:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:07.735 13:13:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=290101 00:06:07.735 13:13:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 290101 00:06:07.735 13:13:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.735 13:13:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 290101 ']' 00:06:07.735 13:13:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.735 13:13:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.735 13:13:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.735 13:13:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.735 13:13:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.735 [2024-12-09 13:13:09.816036] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:06:07.735 [2024-12-09 13:13:09.816091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290101 ] 00:06:07.735 [2024-12-09 13:13:09.900293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.735 [2024-12-09 13:13:09.941913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.994 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.994 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:07.995 13:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 290101 00:06:07.995 13:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 290101 00:06:07.995 13:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.563 lslocks: write error 00:06:08.563 13:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 290101 00:06:08.563 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 290101 ']' 00:06:08.563 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 290101 00:06:08.563 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:08.563 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.563 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290101 00:06:08.563 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.563 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.563 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290101' 00:06:08.563 killing process with pid 290101 00:06:08.563 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 290101 00:06:08.563 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 290101 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 290101 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 290101 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 290101 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 290101 ']' 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 
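The locks_exist check traced above is a file-lock lookup: starting spdk_tgt with -m <cpumask> takes a lock whose path contains spdk_cpu_lock for the claimed core, and the test greps lslocks output for it. A minimal sketch, assuming it is run from an SPDK checkout; the backgrounding and $pid handling are illustrative, not the test's exact helpers:

    build/bin/spdk_tgt -m 0x1 &                  # claim core 0
    pid=$!
    lslocks -p "$pid" | grep -q spdk_cpu_lock \
      && echo "core lock held by pid $pid"
    kill -9 "$pid"                               # waiting on $pid afterwards reports 'No such process'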
00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.822 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (290101) - No such process 00:06:08.822 ERROR: process (pid: 290101) is no longer running 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:08.822 00:06:08.822 real 0m1.108s 00:06:08.822 user 0m1.058s 00:06:08.822 sys 0m0.532s 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.822 13:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.822 ************************************ 00:06:08.822 END TEST default_locks 00:06:08.822 ************************************ 00:06:08.822 13:13:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:08.822 13:13:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.822 13:13:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.822 13:13:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.822 ************************************ 00:06:08.822 START TEST default_locks_via_rpc 00:06:08.822 ************************************ 00:06:08.822 13:13:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:08.822 13:13:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=290349 00:06:08.822 13:13:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 290349 00:06:08.822 13:13:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.822 13:13:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 290349 ']' 00:06:08.822 13:13:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.822 13:13:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.822 13:13:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.822 13:13:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.822 13:13:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.822 [2024-12-09 13:13:11.004138] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:08.823 [2024-12-09 13:13:11.004214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290349 ] 00:06:09.081 [2024-12-09 13:13:11.092607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.081 [2024-12-09 13:13:11.134019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 290349 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 290349 00:06:09.340 13:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.600 13:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 290349 00:06:09.600 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 290349 ']' 00:06:09.600 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 290349 00:06:09.600 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:09.600 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:06:09.600 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290349 00:06:09.600 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.600 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.600 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290349' 00:06:09.600 killing process with pid 290349 00:06:09.600 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 290349 00:06:09.600 13:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 290349 00:06:10.168 00:06:10.168 real 0m1.132s 00:06:10.168 user 0m1.090s 00:06:10.168 sys 0m0.549s 00:06:10.168 13:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.168 13:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.168 ************************************ 00:06:10.168 END TEST default_locks_via_rpc 00:06:10.168 ************************************ 00:06:10.168 13:13:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:10.168 13:13:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.168 13:13:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.168 13:13:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.168 ************************************ 00:06:10.168 START TEST non_locking_app_on_locked_coremask 00:06:10.168 ************************************ 00:06:10.168 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:10.168 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=290647 00:06:10.168 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 290647 /var/tmp/spdk.sock 00:06:10.168 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.168 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 290647 ']' 00:06:10.168 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.168 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.168 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.168 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.168 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.168 [2024-12-09 13:13:12.213398] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:06:10.168 [2024-12-09 13:13:12.213454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290647 ] 00:06:10.168 [2024-12-09 13:13:12.297860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.168 [2024-12-09 13:13:12.339490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.427 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.427 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.427 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=290658 00:06:10.427 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 290658 /var/tmp/spdk2.sock 00:06:10.427 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:10.427 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 290658 ']' 00:06:10.427 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.427 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.427 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.427 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.427 13:13:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.427 [2024-12-09 13:13:12.565509] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:10.427 [2024-12-09 13:13:12.565574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290658 ] 00:06:10.427 [2024-12-09 13:13:12.660584] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.427 [2024-12-09 13:13:12.660610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.686 [2024-12-09 13:13:12.740960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.253 13:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.253 13:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.253 13:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 290647 00:06:11.253 13:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 290647 00:06:11.253 13:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.630 lslocks: write error 00:06:12.630 13:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 290647 00:06:12.630 13:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 290647 ']' 00:06:12.630 13:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 290647 00:06:12.630 13:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:12.630 13:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.630 13:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290647 00:06:12.630 13:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.630 13:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.630 13:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290647' 00:06:12.630 killing process with pid 290647 00:06:12.630 13:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 290647 00:06:12.630 13:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 290647 00:06:13.198 13:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 290658 00:06:13.198 13:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 290658 ']' 00:06:13.198 13:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 290658 00:06:13.198 13:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:13.198 13:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.198 13:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290658 00:06:13.198 13:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.198 13:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.198 13:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290658' 00:06:13.198 killing 
process with pid 290658 00:06:13.198 13:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 290658 00:06:13.198 13:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 290658 00:06:13.457 00:06:13.457 real 0m3.406s 00:06:13.457 user 0m3.587s 00:06:13.457 sys 0m1.264s 00:06:13.457 13:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.457 13:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.457 ************************************ 00:06:13.457 END TEST non_locking_app_on_locked_coremask 00:06:13.457 ************************************ 00:06:13.457 13:13:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:13.457 13:13:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.457 13:13:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.457 13:13:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.457 ************************************ 00:06:13.457 START TEST locking_app_on_unlocked_coremask 00:06:13.457 ************************************ 00:06:13.457 13:13:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:13.457 13:13:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:13.457 13:13:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=291222 00:06:13.457 13:13:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 291222 /var/tmp/spdk.sock 00:06:13.457 13:13:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 291222 ']' 00:06:13.457 13:13:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.457 13:13:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.457 13:13:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.457 13:13:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.457 13:13:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.457 [2024-12-09 13:13:15.695247] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:13.457 [2024-12-09 13:13:15.695303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291222 ] 00:06:13.717 [2024-12-09 13:13:15.780571] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:13.717 [2024-12-09 13:13:15.780599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.717 [2024-12-09 13:13:15.822547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.977 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.977 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:13.977 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=291290 00:06:13.977 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 291290 /var/tmp/spdk2.sock 00:06:13.977 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:13.977 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 291290 ']' 00:06:13.977 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.977 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.977 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.977 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.977 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.977 [2024-12-09 13:13:16.065526] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
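Here the first target was started with --disable-cpumask-locks, so the "CPU core locks deactivated" notice above means it never claims core 0 and a second instance can come up on the same cpumask. A sketch of the two launches as traced, with separate RPC sockets keeping the instances apart (paths are this run's):

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &      # no core claim is taken
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &       # same core, still starts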
00:06:13.977 [2024-12-09 13:13:16.065601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291290 ] 00:06:13.977 [2024-12-09 13:13:16.167244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.236 [2024-12-09 13:13:16.255140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.804 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.804 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:14.804 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 291290 00:06:14.804 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.804 13:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 291290 00:06:15.741 lslocks: write error 00:06:15.741 13:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 291222 00:06:15.741 13:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 291222 ']' 00:06:15.741 13:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 291222 00:06:15.741 13:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:15.741 13:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.741 13:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291222 00:06:15.741 13:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.741 13:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.741 13:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291222' 00:06:15.741 killing process with pid 291222 00:06:15.741 13:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 291222 00:06:15.741 13:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 291222 00:06:16.309 13:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 291290 00:06:16.309 13:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 291290 ']' 00:06:16.309 13:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 291290 00:06:16.309 13:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.309 13:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.309 13:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291290 00:06:16.309 13:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.309 13:13:18 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.309 13:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291290' 00:06:16.309 killing process with pid 291290 00:06:16.309 13:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 291290 00:06:16.309 13:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 291290 00:06:16.569 00:06:16.569 real 0m2.974s 00:06:16.569 user 0m3.104s 00:06:16.569 sys 0m1.122s 00:06:16.569 13:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.569 13:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.569 ************************************ 00:06:16.569 END TEST locking_app_on_unlocked_coremask 00:06:16.569 ************************************ 00:06:16.569 13:13:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:16.569 13:13:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.569 13:13:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.569 13:13:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.569 ************************************ 00:06:16.569 START TEST locking_app_on_locked_coremask 00:06:16.569 ************************************ 00:06:16.569 13:13:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:16.569 13:13:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=291795 00:06:16.569 13:13:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 291795 /var/tmp/spdk.sock 00:06:16.569 13:13:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.569 13:13:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 291795 ']' 00:06:16.569 13:13:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.569 13:13:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.569 13:13:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.569 13:13:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.569 13:13:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.569 [2024-12-09 13:13:18.755674] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:06:16.569 [2024-12-09 13:13:18.755728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291795 ] 00:06:16.828 [2024-12-09 13:13:18.839237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.828 [2024-12-09 13:13:18.881165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=291834 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 291834 /var/tmp/spdk2.sock 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 291834 /var/tmp/spdk2.sock 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 291834 /var/tmp/spdk2.sock 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 291834 ']' 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.087 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.087 [2024-12-09 13:13:19.107562] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:06:17.087 [2024-12-09 13:13:19.107631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291834 ] 00:06:17.087 [2024-12-09 13:13:19.206042] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 291795 has claimed it. 00:06:17.087 [2024-12-09 13:13:19.206083] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.655 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (291834) - No such process 00:06:17.655 ERROR: process (pid: 291834) is no longer running 00:06:17.655 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.655 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:17.655 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:17.655 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:17.655 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:17.655 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:17.655 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 291795 00:06:17.655 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 291795 00:06:17.655 13:13:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.223 lslocks: write error 00:06:18.223 13:13:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 291795 00:06:18.223 13:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 291795 ']' 00:06:18.223 13:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 291795 00:06:18.223 13:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.223 13:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.223 13:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291795 00:06:18.223 13:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.223 13:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.223 13:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291795' 00:06:18.223 killing process with pid 291795 00:06:18.223 13:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 291795 00:06:18.223 13:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 291795 00:06:18.483 00:06:18.483 real 0m1.911s 00:06:18.483 user 0m2.035s 00:06:18.483 sys 0m0.678s 00:06:18.483 13:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.483 
13:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.483 ************************************ 00:06:18.483 END TEST locking_app_on_locked_coremask 00:06:18.483 ************************************ 00:06:18.483 13:13:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:18.483 13:13:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.483 13:13:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.483 13:13:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.483 ************************************ 00:06:18.483 START TEST locking_overlapped_coremask 00:06:18.483 ************************************ 00:06:18.483 13:13:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:18.483 13:13:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=292189 00:06:18.483 13:13:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 292189 /var/tmp/spdk.sock 00:06:18.483 13:13:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:18.483 13:13:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 292189 ']' 00:06:18.741 13:13:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.741 13:13:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.741 13:13:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.741 13:13:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.742 13:13:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.742 [2024-12-09 13:13:20.749224] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:06:18.742 [2024-12-09 13:13:20.749283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292189 ] 00:06:18.742 [2024-12-09 13:13:20.834772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.742 [2024-12-09 13:13:20.879490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.742 [2024-12-09 13:13:20.879630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.742 [2024-12-09 13:13:20.879631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=292354 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 292354 /var/tmp/spdk2.sock 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 292354 /var/tmp/spdk2.sock 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 292354 /var/tmp/spdk2.sock 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 292354 ']' 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.000 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.000 [2024-12-09 13:13:21.120157] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
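The two core masks in this locking_overlapped_coremask run overlap by design: the first target uses -m 0x7 (cores 0-2) and the second uses -m 0x1c (cores 2-4), so they collide on core 2, which is exactly the core named in the claim_cpu_cores error that follows. The overlap can be checked with plain shell arithmetic (illustrative only):

    $ printf 'overlap mask: 0x%x\n' $((0x7 & 0x1c))
    overlap mask: 0x4    # bit 2 set, so core 2 is the contended core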
00:06:19.000 [2024-12-09 13:13:21.120250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292354 ] 00:06:19.000 [2024-12-09 13:13:21.222063] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 292189 has claimed it. 00:06:19.000 [2024-12-09 13:13:21.222101] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:19.566 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (292354) - No such process 00:06:19.566 ERROR: process (pid: 292354) is no longer running 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 292189 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 292189 ']' 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 292189 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.567 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 292189 00:06:19.825 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.825 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.825 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 292189' 00:06:19.825 killing process with pid 292189 00:06:19.825 13:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 292189 00:06:19.825 13:13:21 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 292189 00:06:20.084 00:06:20.084 real 0m1.419s 00:06:20.084 user 0m3.890s 00:06:20.084 sys 0m0.444s 00:06:20.084 13:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.084 13:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.084 ************************************ 00:06:20.084 END TEST locking_overlapped_coremask 00:06:20.084 ************************************ 00:06:20.084 13:13:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:20.084 13:13:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.084 13:13:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.084 13:13:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.084 ************************************ 00:06:20.084 START TEST locking_overlapped_coremask_via_rpc 00:06:20.084 ************************************ 00:06:20.084 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:20.084 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=292457 00:06:20.084 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 292457 /var/tmp/spdk.sock 00:06:20.084 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:20.084 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 292457 ']' 00:06:20.084 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.084 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.084 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.084 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.084 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.084 [2024-12-09 13:13:22.258745] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:20.084 [2024-12-09 13:13:22.258817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292457 ] 00:06:20.343 [2024-12-09 13:13:22.346980] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:20.343 [2024-12-09 13:13:22.347007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.343 [2024-12-09 13:13:22.391787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.343 [2024-12-09 13:13:22.391894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.343 [2024-12-09 13:13:22.391896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.601 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.601 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:20.601 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=292655 00:06:20.601 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 292655 /var/tmp/spdk2.sock 00:06:20.601 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:20.601 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 292655 ']' 00:06:20.601 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.601 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.601 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.601 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.601 13:13:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.601 [2024-12-09 13:13:22.628422] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:20.601 [2024-12-09 13:13:22.628514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292655 ] 00:06:20.601 [2024-12-09 13:13:22.730887] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
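In the locking_overlapped_coremask_via_rpc case both targets start with --disable-cpumask-locks, so neither claims its cores at boot (the two "CPU core locks deactivated" notices above). The test then re-enables the locks over JSON-RPC: the call against the first target succeeds, while the same call against the second target on /var/tmp/spdk2.sock is expected to fail with "Failed to claim CPU core: 2". A hedged sketch of those two client calls (method name and socket paths taken from the trace, the rest illustrative):

    # first target (default socket /var/tmp/spdk.sock) claims cores 0-2
    ./scripts/rpc.py framework_enable_cpumask_locks
    # second target must not be able to lock the overlapping core 2
    if ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo "unexpected: overlapping cores were locked twice" >&2
        exit 1
    fi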
00:06:20.601 [2024-12-09 13:13:22.730918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.601 [2024-12-09 13:13:22.820130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.602 [2024-12-09 13:13:22.824632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.602 [2024-12-09 13:13:22.824634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.536 [2024-12-09 13:13:23.506651] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 292457 has claimed it. 
00:06:21.536 request: 00:06:21.536 { 00:06:21.536 "method": "framework_enable_cpumask_locks", 00:06:21.536 "req_id": 1 00:06:21.536 } 00:06:21.536 Got JSON-RPC error response 00:06:21.536 response: 00:06:21.536 { 00:06:21.536 "code": -32603, 00:06:21.536 "message": "Failed to claim CPU core: 2" 00:06:21.536 } 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 292457 /var/tmp/spdk.sock 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 292457 ']' 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 292655 /var/tmp/spdk2.sock 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 292655 ']' 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.536 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.537 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:21.537 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.537 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.795 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.795 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:21.795 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:21.795 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:21.795 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:21.795 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:21.795 00:06:21.795 real 0m1.693s 00:06:21.795 user 0m0.795s 00:06:21.795 sys 0m0.171s 00:06:21.795 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.795 13:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.795 ************************************ 00:06:21.795 END TEST locking_overlapped_coremask_via_rpc 00:06:21.795 ************************************ 00:06:21.795 13:13:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:21.795 13:13:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 292457 ]] 00:06:21.795 13:13:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 292457 00:06:21.795 13:13:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 292457 ']' 00:06:21.795 13:13:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 292457 00:06:21.795 13:13:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:21.795 13:13:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.795 13:13:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 292457 00:06:21.795 13:13:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.795 13:13:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.795 13:13:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 292457' 00:06:21.795 killing process with pid 292457 00:06:21.795 13:13:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 292457 00:06:21.795 13:13:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 292457 00:06:22.362 13:13:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 292655 ]] 00:06:22.362 13:13:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 292655 00:06:22.362 13:13:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 292655 ']' 00:06:22.362 13:13:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 292655 00:06:22.362 13:13:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:22.362 13:13:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
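The check_remaining_locks helper traced above compares the lock files actually present in /var/tmp against the set that the claimed core mask (cores 0-2 here) should have produced. A minimal sketch of the same comparison, reusing the glob and brace expansion visible in the trace:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "stale or missing CPU core locks" >&2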
00:06:22.362 13:13:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 292655 00:06:22.362 13:13:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:22.362 13:13:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:22.362 13:13:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 292655' 00:06:22.362 killing process with pid 292655 00:06:22.362 13:13:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 292655 00:06:22.362 13:13:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 292655 00:06:22.622 13:13:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.622 13:13:24 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:22.622 13:13:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 292457 ]] 00:06:22.622 13:13:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 292457 00:06:22.622 13:13:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 292457 ']' 00:06:22.622 13:13:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 292457 00:06:22.622 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (292457) - No such process 00:06:22.622 13:13:24 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 292457 is not found' 00:06:22.622 Process with pid 292457 is not found 00:06:22.622 13:13:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 292655 ]] 00:06:22.622 13:13:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 292655 00:06:22.622 13:13:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 292655 ']' 00:06:22.622 13:13:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 292655 00:06:22.622 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (292655) - No such process 00:06:22.622 13:13:24 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 292655 is not found' 00:06:22.622 Process with pid 292655 is not found 00:06:22.622 13:13:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.622 00:06:22.622 real 0m15.160s 00:06:22.622 user 0m25.411s 00:06:22.622 sys 0m5.859s 00:06:22.622 13:13:24 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.622 13:13:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.622 ************************************ 00:06:22.622 END TEST cpu_locks 00:06:22.622 ************************************ 00:06:22.622 00:06:22.622 real 0m39.678s 00:06:22.622 user 1m13.146s 00:06:22.622 sys 0m10.240s 00:06:22.622 13:13:24 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.622 13:13:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.622 ************************************ 00:06:22.622 END TEST event 00:06:22.622 ************************************ 00:06:22.622 13:13:24 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:22.622 13:13:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.622 13:13:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.622 13:13:24 -- common/autotest_common.sh@10 -- # set +x 00:06:22.622 ************************************ 00:06:22.622 START TEST thread 00:06:22.622 ************************************ 00:06:22.622 13:13:24 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:22.882 * Looking for test storage... 00:06:22.882 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:22.882 13:13:24 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:22.882 13:13:24 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:22.882 13:13:24 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:22.882 13:13:25 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:22.882 13:13:25 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.882 13:13:25 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.882 13:13:25 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.882 13:13:25 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.882 13:13:25 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.882 13:13:25 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.882 13:13:25 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.882 13:13:25 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.882 13:13:25 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.882 13:13:25 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.882 13:13:25 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.882 13:13:25 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:22.882 13:13:25 thread -- scripts/common.sh@345 -- # : 1 00:06:22.883 13:13:25 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.883 13:13:25 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.883 13:13:25 thread -- scripts/common.sh@365 -- # decimal 1 00:06:22.883 13:13:25 thread -- scripts/common.sh@353 -- # local d=1 00:06:22.883 13:13:25 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.883 13:13:25 thread -- scripts/common.sh@355 -- # echo 1 00:06:22.883 13:13:25 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.883 13:13:25 thread -- scripts/common.sh@366 -- # decimal 2 00:06:22.883 13:13:25 thread -- scripts/common.sh@353 -- # local d=2 00:06:22.883 13:13:25 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.883 13:13:25 thread -- scripts/common.sh@355 -- # echo 2 00:06:22.883 13:13:25 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.883 13:13:25 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.883 13:13:25 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.883 13:13:25 thread -- scripts/common.sh@368 -- # return 0 00:06:22.883 13:13:25 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.883 13:13:25 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:22.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.883 --rc genhtml_branch_coverage=1 00:06:22.883 --rc genhtml_function_coverage=1 00:06:22.883 --rc genhtml_legend=1 00:06:22.883 --rc geninfo_all_blocks=1 00:06:22.883 --rc geninfo_unexecuted_blocks=1 00:06:22.883 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:22.883 ' 00:06:22.883 13:13:25 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:22.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.883 --rc genhtml_branch_coverage=1 00:06:22.883 --rc genhtml_function_coverage=1 00:06:22.883 --rc genhtml_legend=1 00:06:22.883 --rc geninfo_all_blocks=1 
00:06:22.883 --rc geninfo_unexecuted_blocks=1 00:06:22.883 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:22.883 ' 00:06:22.883 13:13:25 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:22.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.883 --rc genhtml_branch_coverage=1 00:06:22.883 --rc genhtml_function_coverage=1 00:06:22.883 --rc genhtml_legend=1 00:06:22.883 --rc geninfo_all_blocks=1 00:06:22.883 --rc geninfo_unexecuted_blocks=1 00:06:22.883 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:22.883 ' 00:06:22.883 13:13:25 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:22.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.883 --rc genhtml_branch_coverage=1 00:06:22.883 --rc genhtml_function_coverage=1 00:06:22.883 --rc genhtml_legend=1 00:06:22.883 --rc geninfo_all_blocks=1 00:06:22.883 --rc geninfo_unexecuted_blocks=1 00:06:22.883 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:22.883 ' 00:06:22.883 13:13:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.883 13:13:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:22.883 13:13:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.883 13:13:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.883 ************************************ 00:06:22.883 START TEST thread_poller_perf 00:06:22.883 ************************************ 00:06:22.883 13:13:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.883 [2024-12-09 13:13:25.097280] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:22.883 [2024-12-09 13:13:25.097361] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293044 ] 00:06:23.147 [2024-12-09 13:13:25.186407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.147 [2024-12-09 13:13:25.226205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.147 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:24.083 [2024-12-09T12:13:26.329Z] ====================================== 00:06:24.083 [2024-12-09T12:13:26.329Z] busy:2505818842 (cyc) 00:06:24.083 [2024-12-09T12:13:26.329Z] total_run_count: 845000 00:06:24.083 [2024-12-09T12:13:26.329Z] tsc_hz: 2500000000 (cyc) 00:06:24.083 [2024-12-09T12:13:26.329Z] ====================================== 00:06:24.083 [2024-12-09T12:13:26.329Z] poller_cost: 2965 (cyc), 1186 (nsec) 00:06:24.083 00:06:24.083 real 0m1.187s 00:06:24.083 user 0m1.090s 00:06:24.083 sys 0m0.092s 00:06:24.083 13:13:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.083 13:13:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.083 ************************************ 00:06:24.083 END TEST thread_poller_perf 00:06:24.083 ************************************ 00:06:24.083 13:13:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.083 13:13:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:24.083 13:13:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.083 13:13:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.342 ************************************ 00:06:24.342 START TEST thread_poller_perf 00:06:24.342 ************************************ 00:06:24.342 13:13:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.342 [2024-12-09 13:13:26.366387] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:24.342 [2024-12-09 13:13:26.366471] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293324 ] 00:06:24.342 [2024-12-09 13:13:26.455043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.342 [2024-12-09 13:13:26.494491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.342 Running 1000 pollers for 1 seconds with 0 microseconds period. 
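The poller_cost figures in the summary above follow directly from the busy cycle count, the run count, and the TSC frequency: 2505818842 cyc / 845000 runs is about 2965 cyc per poll, and 2965 cyc at 2.5 GHz is about 1186 nsec. A small sketch that re-derives that line from those three numbers (illustrative only):

    busy=2505818842 runs=845000 tsc_hz=2500000000
    awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
        'BEGIN { c = b / r; printf "poller_cost: %d (cyc), %d (nsec)\n", c, c / hz * 1e9 }'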
00:06:25.719 [2024-12-09T12:13:27.965Z] ====================================== 00:06:25.719 [2024-12-09T12:13:27.965Z] busy:2501395606 (cyc) 00:06:25.719 [2024-12-09T12:13:27.965Z] total_run_count: 12100000 00:06:25.719 [2024-12-09T12:13:27.965Z] tsc_hz: 2500000000 (cyc) 00:06:25.719 [2024-12-09T12:13:27.965Z] ====================================== 00:06:25.719 [2024-12-09T12:13:27.965Z] poller_cost: 206 (cyc), 82 (nsec) 00:06:25.719 00:06:25.719 real 0m1.182s 00:06:25.719 user 0m1.093s 00:06:25.719 sys 0m0.085s 00:06:25.719 13:13:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.719 13:13:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.719 ************************************ 00:06:25.719 END TEST thread_poller_perf 00:06:25.719 ************************************ 00:06:25.719 13:13:27 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:25.719 13:13:27 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:25.719 13:13:27 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.719 13:13:27 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.719 13:13:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.719 ************************************ 00:06:25.719 START TEST thread_spdk_lock 00:06:25.719 ************************************ 00:06:25.719 13:13:27 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:25.719 [2024-12-09 13:13:27.630259] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:25.719 [2024-12-09 13:13:27.630354] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293612 ] 00:06:25.719 [2024-12-09 13:13:27.718654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.719 [2024-12-09 13:13:27.758636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.719 [2024-12-09 13:13:27.758638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.287 [2024-12-09 13:13:28.254776] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 989:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:26.287 [2024-12-09 13:13:28.254815] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3140:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:26.287 [2024-12-09 13:13:28.254826] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3095:sspin_stacks_print: *ERROR*: spinlock 0x14dea40 00:06:26.287 [2024-12-09 13:13:28.255545] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 884:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:26.287 [2024-12-09 13:13:28.255648] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1050:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:26.287 [2024-12-09 
13:13:28.255666] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 884:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:26.287 Starting test contend 00:06:26.287 Worker Delay Wait us Hold us Total us 00:06:26.287 0 3 164938 188158 353097 00:06:26.287 1 5 79693 289504 369198 00:06:26.287 PASS test contend 00:06:26.287 Starting test hold_by_poller 00:06:26.287 PASS test hold_by_poller 00:06:26.287 Starting test hold_by_message 00:06:26.287 PASS test hold_by_message 00:06:26.287 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:26.287 100014 assertions passed 00:06:26.287 0 assertions failed 00:06:26.287 00:06:26.287 real 0m0.677s 00:06:26.287 user 0m1.084s 00:06:26.287 sys 0m0.086s 00:06:26.287 13:13:28 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.287 13:13:28 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:26.287 ************************************ 00:06:26.287 END TEST thread_spdk_lock 00:06:26.287 ************************************ 00:06:26.287 00:06:26.287 real 0m3.487s 00:06:26.287 user 0m3.466s 00:06:26.287 sys 0m0.543s 00:06:26.287 13:13:28 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.287 13:13:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.287 ************************************ 00:06:26.287 END TEST thread 00:06:26.287 ************************************ 00:06:26.287 13:13:28 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:26.287 13:13:28 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:26.287 13:13:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.287 13:13:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.287 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:06:26.287 ************************************ 00:06:26.287 START TEST app_cmdline 00:06:26.287 ************************************ 00:06:26.287 13:13:28 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:26.287 * Looking for test storage... 
00:06:26.287 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:26.287 13:13:28 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.287 13:13:28 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.287 13:13:28 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.547 13:13:28 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.547 13:13:28 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:26.547 13:13:28 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.547 13:13:28 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.547 --rc genhtml_branch_coverage=1 00:06:26.547 --rc genhtml_function_coverage=1 00:06:26.547 --rc genhtml_legend=1 00:06:26.547 --rc geninfo_all_blocks=1 00:06:26.547 --rc geninfo_unexecuted_blocks=1 00:06:26.547 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.547 ' 00:06:26.547 13:13:28 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.547 --rc genhtml_branch_coverage=1 00:06:26.547 --rc genhtml_function_coverage=1 00:06:26.547 --rc 
genhtml_legend=1 00:06:26.547 --rc geninfo_all_blocks=1 00:06:26.547 --rc geninfo_unexecuted_blocks=1 00:06:26.547 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.547 ' 00:06:26.547 13:13:28 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.547 --rc genhtml_branch_coverage=1 00:06:26.547 --rc genhtml_function_coverage=1 00:06:26.547 --rc genhtml_legend=1 00:06:26.547 --rc geninfo_all_blocks=1 00:06:26.547 --rc geninfo_unexecuted_blocks=1 00:06:26.547 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.547 ' 00:06:26.547 13:13:28 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.547 --rc genhtml_branch_coverage=1 00:06:26.547 --rc genhtml_function_coverage=1 00:06:26.547 --rc genhtml_legend=1 00:06:26.547 --rc geninfo_all_blocks=1 00:06:26.547 --rc geninfo_unexecuted_blocks=1 00:06:26.547 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.547 ' 00:06:26.547 13:13:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:26.547 13:13:28 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:26.547 13:13:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=293823 00:06:26.547 13:13:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 293823 00:06:26.547 13:13:28 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 293823 ']' 00:06:26.547 13:13:28 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.547 13:13:28 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.547 13:13:28 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.547 13:13:28 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.547 13:13:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.547 [2024-12-09 13:13:28.631235] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
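The cmdline.sh run above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable on /var/tmp/spdk.sock; the version JSON and the sorted method list that follow come from exactly those calls, and the later env_dpdk_get_mem_stats attempt is expected to be rejected with -32601 "Method not found". A hedged sketch of the three client-side calls (illustrative, not the literal cmdline.sh code):

    ./scripts/rpc.py spdk_get_version                         # allowed: returns the version JSON
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort     # allowed: lists the two permitted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats                   # not on the allowlist: fails with "Method not found"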
00:06:26.547 [2024-12-09 13:13:28.631301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293823 ] 00:06:26.547 [2024-12-09 13:13:28.707991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.547 [2024-12-09 13:13:28.747944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.807 13:13:28 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.807 13:13:28 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:26.807 13:13:28 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:27.065 { 00:06:27.065 "version": "SPDK v25.01-pre git sha1 496bfd677", 00:06:27.065 "fields": { 00:06:27.065 "major": 25, 00:06:27.065 "minor": 1, 00:06:27.065 "patch": 0, 00:06:27.065 "suffix": "-pre", 00:06:27.065 "commit": "496bfd677" 00:06:27.065 } 00:06:27.065 } 00:06:27.065 13:13:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:27.065 13:13:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:27.065 13:13:29 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:27.065 13:13:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:27.065 13:13:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:27.065 13:13:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:27.065 13:13:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:27.065 13:13:29 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.065 13:13:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.065 13:13:29 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.065 13:13:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:27.065 13:13:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:27.065 13:13:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.065 13:13:29 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:27.065 13:13:29 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.065 13:13:29 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:27.065 13:13:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.066 13:13:29 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:27.066 13:13:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.066 13:13:29 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:27.066 13:13:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.066 13:13:29 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:27.066 13:13:29 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:27.066 13:13:29 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.325 request: 00:06:27.325 { 00:06:27.325 "method": "env_dpdk_get_mem_stats", 00:06:27.325 "req_id": 1 00:06:27.325 } 00:06:27.325 Got JSON-RPC error response 00:06:27.325 response: 00:06:27.325 { 00:06:27.325 "code": -32601, 00:06:27.325 "message": "Method not found" 00:06:27.325 } 00:06:27.325 13:13:29 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:27.325 13:13:29 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.325 13:13:29 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.325 13:13:29 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.325 13:13:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 293823 00:06:27.325 13:13:29 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 293823 ']' 00:06:27.325 13:13:29 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 293823 00:06:27.325 13:13:29 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:27.325 13:13:29 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.325 13:13:29 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 293823 00:06:27.325 13:13:29 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.325 13:13:29 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.325 13:13:29 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 293823' 00:06:27.325 killing process with pid 293823 00:06:27.325 13:13:29 app_cmdline -- common/autotest_common.sh@973 -- # kill 293823 00:06:27.325 13:13:29 app_cmdline -- common/autotest_common.sh@978 -- # wait 293823 00:06:27.584 00:06:27.584 real 0m1.329s 00:06:27.584 user 0m1.509s 00:06:27.584 sys 0m0.510s 00:06:27.584 13:13:29 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.584 13:13:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.584 ************************************ 00:06:27.584 END TEST app_cmdline 00:06:27.584 ************************************ 00:06:27.584 13:13:29 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:27.584 13:13:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.584 13:13:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.584 13:13:29 -- common/autotest_common.sh@10 -- # set +x 00:06:27.584 ************************************ 00:06:27.584 START TEST version 00:06:27.584 ************************************ 00:06:27.843 13:13:29 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:27.843 * Looking for test storage... 
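The version test that starts here derives the major, minor, patch, and suffix values by grepping the SPDK_VERSION_* defines out of include/spdk/version.h, taking the second field and stripping the quotes, then compares the assembled "25.1rc0" string against what python3 reports from the spdk module. A hedged one-liner equivalent for the major number (the grep/cut/tr pipeline mirrors the trace below; the path is shortened):

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'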
00:06:27.843 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:27.843 13:13:29 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:27.843 13:13:29 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:27.843 13:13:29 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:27.843 13:13:30 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:27.843 13:13:30 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.843 13:13:30 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.843 13:13:30 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.843 13:13:30 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.843 13:13:30 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.843 13:13:30 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.843 13:13:30 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.843 13:13:30 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.843 13:13:30 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.843 13:13:30 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.843 13:13:30 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.843 13:13:30 version -- scripts/common.sh@344 -- # case "$op" in 00:06:27.843 13:13:30 version -- scripts/common.sh@345 -- # : 1 00:06:27.843 13:13:30 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.843 13:13:30 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.843 13:13:30 version -- scripts/common.sh@365 -- # decimal 1 00:06:27.843 13:13:30 version -- scripts/common.sh@353 -- # local d=1 00:06:27.843 13:13:30 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.843 13:13:30 version -- scripts/common.sh@355 -- # echo 1 00:06:27.843 13:13:30 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.843 13:13:30 version -- scripts/common.sh@366 -- # decimal 2 00:06:27.843 13:13:30 version -- scripts/common.sh@353 -- # local d=2 00:06:27.843 13:13:30 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.843 13:13:30 version -- scripts/common.sh@355 -- # echo 2 00:06:27.843 13:13:30 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.843 13:13:30 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.843 13:13:30 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.843 13:13:30 version -- scripts/common.sh@368 -- # return 0 00:06:27.843 13:13:30 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.843 13:13:30 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:27.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.843 --rc genhtml_branch_coverage=1 00:06:27.843 --rc genhtml_function_coverage=1 00:06:27.843 --rc genhtml_legend=1 00:06:27.843 --rc geninfo_all_blocks=1 00:06:27.843 --rc geninfo_unexecuted_blocks=1 00:06:27.843 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.843 ' 00:06:27.843 13:13:30 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:27.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.843 --rc genhtml_branch_coverage=1 00:06:27.843 --rc genhtml_function_coverage=1 00:06:27.843 --rc genhtml_legend=1 00:06:27.843 --rc geninfo_all_blocks=1 00:06:27.843 --rc geninfo_unexecuted_blocks=1 00:06:27.843 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.843 ' 00:06:27.843 13:13:30 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:27.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.843 --rc genhtml_branch_coverage=1 00:06:27.843 --rc genhtml_function_coverage=1 00:06:27.843 --rc genhtml_legend=1 00:06:27.843 --rc geninfo_all_blocks=1 00:06:27.843 --rc geninfo_unexecuted_blocks=1 00:06:27.843 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.843 ' 00:06:27.843 13:13:30 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:27.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.843 --rc genhtml_branch_coverage=1 00:06:27.843 --rc genhtml_function_coverage=1 00:06:27.843 --rc genhtml_legend=1 00:06:27.843 --rc geninfo_all_blocks=1 00:06:27.843 --rc geninfo_unexecuted_blocks=1 00:06:27.843 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.843 ' 00:06:27.843 13:13:30 version -- app/version.sh@17 -- # get_header_version major 00:06:27.843 13:13:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:27.843 13:13:30 version -- app/version.sh@14 -- # cut -f2 00:06:27.843 13:13:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.843 13:13:30 version -- app/version.sh@17 -- # major=25 00:06:27.843 13:13:30 version -- app/version.sh@18 -- # get_header_version minor 00:06:27.843 13:13:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:27.843 13:13:30 version -- app/version.sh@14 -- # cut -f2 00:06:27.843 13:13:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.843 13:13:30 version -- app/version.sh@18 -- # minor=1 00:06:27.843 13:13:30 version -- app/version.sh@19 -- # get_header_version patch 00:06:27.843 13:13:30 version -- app/version.sh@14 -- # cut -f2 00:06:27.844 13:13:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:27.844 13:13:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.844 13:13:30 version -- app/version.sh@19 -- # patch=0 00:06:27.844 13:13:30 version -- app/version.sh@20 -- # get_header_version suffix 00:06:27.844 13:13:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:27.844 13:13:30 version -- app/version.sh@14 -- # cut -f2 00:06:27.844 13:13:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.844 13:13:30 version -- app/version.sh@20 -- # suffix=-pre 00:06:27.844 13:13:30 version -- app/version.sh@22 -- # version=25.1 00:06:27.844 13:13:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:27.844 13:13:30 version -- app/version.sh@28 -- # version=25.1rc0 00:06:27.844 13:13:30 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:27.844 13:13:30 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:06:28.103 13:13:30 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:28.103 13:13:30 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:28.103 00:06:28.103 real 0m0.271s 00:06:28.103 user 0m0.151s 00:06:28.103 sys 0m0.173s 00:06:28.103 13:13:30 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.103 13:13:30 version -- common/autotest_common.sh@10 -- # set +x 00:06:28.103 ************************************ 00:06:28.103 END TEST version 00:06:28.103 ************************************ 00:06:28.103 13:13:30 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:28.103 13:13:30 -- spdk/autotest.sh@194 -- # uname -s 00:06:28.103 13:13:30 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:28.103 13:13:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:28.103 13:13:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:28.103 13:13:30 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:28.103 13:13:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:28.103 13:13:30 -- common/autotest_common.sh@10 -- # set +x 00:06:28.103 13:13:30 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:06:28.103 13:13:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:06:28.103 13:13:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:06:28.103 13:13:30 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:06:28.103 13:13:30 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:28.103 13:13:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.103 13:13:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.103 13:13:30 -- common/autotest_common.sh@10 -- # set +x 00:06:28.103 ************************************ 00:06:28.103 START TEST llvm_fuzz 00:06:28.103 ************************************ 00:06:28.103 13:13:30 llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:28.103 * Looking for test storage... 
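The version test traced above reads each SPDK_VERSION_* macro directly out of include/spdk/version.h with grep/cut/tr, assembles the version string (25.1 plus the -pre suffix becomes 25.1rc0), and checks it against what the Python bindings report. A condensed, illustrative rendering of that extraction, reconstructed from the trace (the suffix handling in the real app/version.sh is slightly more general):

  # Condensed sketch of the header parsing seen in the version test trace.
  get_header_version() {
      # e.g. MAJOR -> 25, MINOR -> 1, PATCH -> 0, SUFFIX -> -pre
      grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h |
          cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)
  minor=$(get_header_version MINOR)
  patch=$(get_header_version PATCH)
  suffix=$(get_header_version SUFFIX)
  version="$major.$minor"
  (( patch != 0 )) && version="$version.$patch"
  [[ $suffix == -pre ]] && version="${version}rc0"   # 25.1 + -pre -> 25.1rc0
  py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
  [[ $version == "$py_version" ]]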
00:06:28.363 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.363 13:13:30 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.363 --rc genhtml_branch_coverage=1 00:06:28.363 --rc genhtml_function_coverage=1 00:06:28.363 --rc genhtml_legend=1 00:06:28.363 --rc geninfo_all_blocks=1 00:06:28.363 --rc geninfo_unexecuted_blocks=1 00:06:28.363 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.363 ' 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.363 --rc genhtml_branch_coverage=1 00:06:28.363 --rc genhtml_function_coverage=1 00:06:28.363 --rc genhtml_legend=1 00:06:28.363 --rc geninfo_all_blocks=1 00:06:28.363 --rc 
geninfo_unexecuted_blocks=1 00:06:28.363 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.363 ' 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.363 --rc genhtml_branch_coverage=1 00:06:28.363 --rc genhtml_function_coverage=1 00:06:28.363 --rc genhtml_legend=1 00:06:28.363 --rc geninfo_all_blocks=1 00:06:28.363 --rc geninfo_unexecuted_blocks=1 00:06:28.363 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.363 ' 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.363 --rc genhtml_branch_coverage=1 00:06:28.363 --rc genhtml_function_coverage=1 00:06:28.363 --rc genhtml_legend=1 00:06:28.363 --rc geninfo_all_blocks=1 00:06:28.363 --rc geninfo_unexecuted_blocks=1 00:06:28.363 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.363 ' 00:06:28.363 13:13:30 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:28.363 13:13:30 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@550 -- # fuzzers=() 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@550 -- # local fuzzers 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@552 -- # [[ -n '' ]] 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@555 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@556 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@559 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:28.363 13:13:30 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:28.363 13:13:30 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:28.363 13:13:30 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:28.363 13:13:30 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:28.363 13:13:30 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:28.363 13:13:30 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:28.363 13:13:30 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:28.363 13:13:30 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:28.363 13:13:30 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.363 13:13:30 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:28.363 ************************************ 00:06:28.363 START TEST nvmf_llvm_fuzz 00:06:28.363 ************************************ 00:06:28.363 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:28.363 * Looking for test storage... 
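llvm.sh above discovers its fuzzer targets simply by globbing test/fuzz/llvm/ and keeping the basenames; helper entries such as common.sh and llvm-gcov.sh fall through the case statement, while the nvmf and vfio directories are dispatched to their run.sh via run_test. A compressed, illustrative reconstruction of that loop based on the trace (the exact case patterns in the real script are an assumption here):

  # Compressed reconstruction of the fuzzer-target dispatch traced above.
  fuzzers=("$rootdir/test/fuzz/llvm/"*)     # common.sh llvm-gcov.sh nvmf vfio
  fuzzers=("${fuzzers[@]##*/}")             # strip directories, keep basenames
  for fuzzer in "${fuzzers[@]}"; do
      case "$fuzzer" in
          nvmf | vfio)
              run_test "${fuzzer}_llvm_fuzz" "$rootdir/test/fuzz/llvm/$fuzzer/run.sh"
              ;;
          *) ;;                             # common.sh, llvm-gcov.sh are skipped
      esac
  done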
00:06:28.363 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:28.363 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.363 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.363 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.626 --rc genhtml_branch_coverage=1 00:06:28.626 --rc genhtml_function_coverage=1 00:06:28.626 --rc genhtml_legend=1 00:06:28.626 --rc geninfo_all_blocks=1 00:06:28.626 --rc geninfo_unexecuted_blocks=1 00:06:28.626 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.626 ' 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.626 --rc genhtml_branch_coverage=1 00:06:28.626 --rc genhtml_function_coverage=1 00:06:28.626 --rc genhtml_legend=1 00:06:28.626 --rc geninfo_all_blocks=1 00:06:28.626 --rc geninfo_unexecuted_blocks=1 00:06:28.626 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.626 ' 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.626 --rc genhtml_branch_coverage=1 00:06:28.626 --rc genhtml_function_coverage=1 00:06:28.626 --rc genhtml_legend=1 00:06:28.626 --rc geninfo_all_blocks=1 00:06:28.626 --rc geninfo_unexecuted_blocks=1 00:06:28.626 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.626 ' 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.626 --rc genhtml_branch_coverage=1 00:06:28.626 --rc genhtml_function_coverage=1 00:06:28.626 --rc genhtml_legend=1 00:06:28.626 --rc geninfo_all_blocks=1 00:06:28.626 --rc geninfo_unexecuted_blocks=1 00:06:28.626 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.626 ' 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:28.626 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:28.627 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:28.627 #define SPDK_CONFIG_H 00:06:28.627 #define SPDK_CONFIG_AIO_FSDEV 1 00:06:28.627 #define SPDK_CONFIG_APPS 1 00:06:28.627 #define SPDK_CONFIG_ARCH native 00:06:28.627 #undef SPDK_CONFIG_ASAN 00:06:28.627 #undef SPDK_CONFIG_AVAHI 00:06:28.627 #undef SPDK_CONFIG_CET 00:06:28.627 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:06:28.627 #define SPDK_CONFIG_COVERAGE 1 00:06:28.627 #define SPDK_CONFIG_CROSS_PREFIX 00:06:28.627 #undef SPDK_CONFIG_CRYPTO 00:06:28.627 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:28.627 #undef SPDK_CONFIG_CUSTOMOCF 00:06:28.627 #undef SPDK_CONFIG_DAOS 00:06:28.627 #define SPDK_CONFIG_DAOS_DIR 00:06:28.627 #define SPDK_CONFIG_DEBUG 1 00:06:28.627 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:28.627 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:28.627 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:28.627 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:28.627 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:28.627 #undef SPDK_CONFIG_DPDK_UADK 00:06:28.627 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:28.627 #define SPDK_CONFIG_EXAMPLES 1 00:06:28.627 #undef SPDK_CONFIG_FC 00:06:28.627 #define SPDK_CONFIG_FC_PATH 00:06:28.627 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:28.627 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:28.627 #define SPDK_CONFIG_FSDEV 1 00:06:28.627 #undef SPDK_CONFIG_FUSE 00:06:28.627 #define SPDK_CONFIG_FUZZER 1 00:06:28.627 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:28.627 #undef 
SPDK_CONFIG_GOLANG 00:06:28.627 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:28.627 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:28.627 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:28.627 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:28.627 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:28.627 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:28.627 #undef SPDK_CONFIG_HAVE_LZ4 00:06:28.627 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:06:28.627 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:06:28.627 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:28.627 #define SPDK_CONFIG_IDXD 1 00:06:28.627 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:28.627 #undef SPDK_CONFIG_IPSEC_MB 00:06:28.627 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:28.627 #define SPDK_CONFIG_ISAL 1 00:06:28.627 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:28.627 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:28.627 #define SPDK_CONFIG_LIBDIR 00:06:28.627 #undef SPDK_CONFIG_LTO 00:06:28.627 #define SPDK_CONFIG_MAX_LCORES 128 00:06:28.627 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:06:28.627 #define SPDK_CONFIG_NVME_CUSE 1 00:06:28.627 #undef SPDK_CONFIG_OCF 00:06:28.627 #define SPDK_CONFIG_OCF_PATH 00:06:28.627 #define SPDK_CONFIG_OPENSSL_PATH 00:06:28.627 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:28.627 #define SPDK_CONFIG_PGO_DIR 00:06:28.627 #undef SPDK_CONFIG_PGO_USE 00:06:28.627 #define SPDK_CONFIG_PREFIX /usr/local 00:06:28.627 #undef SPDK_CONFIG_RAID5F 00:06:28.627 #undef SPDK_CONFIG_RBD 00:06:28.627 #define SPDK_CONFIG_RDMA 1 00:06:28.627 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:28.627 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:28.627 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:28.627 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:28.627 #undef SPDK_CONFIG_SHARED 00:06:28.628 #undef SPDK_CONFIG_SMA 00:06:28.628 #define SPDK_CONFIG_TESTS 1 00:06:28.628 #undef SPDK_CONFIG_TSAN 00:06:28.628 #define SPDK_CONFIG_UBLK 1 00:06:28.628 #define SPDK_CONFIG_UBSAN 1 00:06:28.628 #undef SPDK_CONFIG_UNIT_TESTS 00:06:28.628 #undef SPDK_CONFIG_URING 00:06:28.628 #define SPDK_CONFIG_URING_PATH 00:06:28.628 #undef SPDK_CONFIG_URING_ZNS 00:06:28.628 #undef SPDK_CONFIG_USDT 00:06:28.628 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:28.628 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:28.628 #define SPDK_CONFIG_VFIO_USER 1 00:06:28.628 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:28.628 #define SPDK_CONFIG_VHOST 1 00:06:28.628 #define SPDK_CONFIG_VIRTIO 1 00:06:28.628 #undef SPDK_CONFIG_VTUNE 00:06:28.628 #define SPDK_CONFIG_VTUNE_DIR 00:06:28.628 #define SPDK_CONFIG_WERROR 1 00:06:28.628 #define SPDK_CONFIG_WPDK_DIR 00:06:28.628 #undef SPDK_CONFIG_XNVME 00:06:28.628 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:06:28.628 13:13:30 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:28.628 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:28.629 13:13:30 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:28.629 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 294377 ]] 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 294377 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.utyaRc 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.utyaRc/tests/nvmf /tmp/spdk.utyaRc 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=58419761152 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67015417856 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=8595656704 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=33504280576 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=33507708928 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=3428352 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=13397094400 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=13403086848 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5992448 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=33507393536 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=33507708928 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=315392 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6701527040 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6701539328 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:06:28.630 * Looking for test storage... 
00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=58419761152 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:06:28.630 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=10810249216 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:28.631 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1698 -- # set -o errtrace 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1703 -- # true 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # xtrace_fd 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.631 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.891 --rc genhtml_branch_coverage=1 00:06:28.891 --rc genhtml_function_coverage=1 00:06:28.891 --rc genhtml_legend=1 00:06:28.891 --rc geninfo_all_blocks=1 00:06:28.891 --rc geninfo_unexecuted_blocks=1 00:06:28.891 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.891 ' 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.891 --rc genhtml_branch_coverage=1 00:06:28.891 --rc genhtml_function_coverage=1 00:06:28.891 --rc genhtml_legend=1 00:06:28.891 --rc geninfo_all_blocks=1 00:06:28.891 --rc geninfo_unexecuted_blocks=1 00:06:28.891 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.891 ' 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.891 --rc genhtml_branch_coverage=1 00:06:28.891 --rc genhtml_function_coverage=1 00:06:28.891 --rc genhtml_legend=1 00:06:28.891 --rc geninfo_all_blocks=1 00:06:28.891 --rc geninfo_unexecuted_blocks=1 00:06:28.891 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.891 ' 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.891 --rc genhtml_branch_coverage=1 00:06:28.891 --rc genhtml_function_coverage=1 00:06:28.891 --rc genhtml_legend=1 00:06:28.891 --rc geninfo_all_blocks=1 00:06:28.891 --rc geninfo_unexecuted_blocks=1 00:06:28.891 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.891 ' 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:28.891 13:13:30 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:28.891 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:28.892 13:13:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:28.892 [2024-12-09 13:13:30.983212] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:28.892 [2024-12-09 13:13:30.983289] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid294442 ] 00:06:29.151 [2024-12-09 13:13:31.195117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.151 [2024-12-09 13:13:31.230665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.151 [2024-12-09 13:13:31.289995] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.151 [2024-12-09 13:13:31.306298] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:29.151 INFO: Running with entropic power schedule (0xFF, 100). 00:06:29.151 INFO: Seed: 3781644490 00:06:29.151 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:06:29.151 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:06:29.151 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:29.151 INFO: A corpus is not provided, starting from an empty corpus 00:06:29.151 #2 INITED exec/s: 0 rss: 65Mb 00:06:29.151 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:29.151 This may also happen if the target rejected all inputs we tried so far 00:06:29.151 [2024-12-09 13:13:31.371619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:29.151 [2024-12-09 13:13:31.371647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.668 NEW_FUNC[1/715]: 0x43bbe8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:29.668 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:29.668 #17 NEW cov: 12103 ft: 12104 corp: 2/111b lim: 320 exec/s: 0 rss: 72Mb L: 110/110 MS: 5 CMP-EraseBytes-ChangeBit-CrossOver-InsertRepeatedBytes- DE: "\000}"- 00:06:29.668 [2024-12-09 13:13:31.702858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:29.668 [2024-12-09 13:13:31.702934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.668 #18 NEW cov: 12216 ft: 12823 corp: 3/222b lim: 320 exec/s: 0 rss: 72Mb L: 111/111 MS: 1 InsertByte- 00:06:29.668 [2024-12-09 13:13:31.772576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:29.668 [2024-12-09 13:13:31.772607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.668 #19 NEW cov: 12222 ft: 12976 corp: 4/337b lim: 320 exec/s: 0 rss: 72Mb L: 115/115 MS: 1 CMP- DE: "\003\000\000\000"- 00:06:29.668 [2024-12-09 13:13:31.832733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:00000000 00:06:29.668 [2024-12-09 13:13:31.832758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.668 #20 NEW cov: 12307 ft: 13218 corp: 5/452b lim: 320 exec/s: 0 rss: 72Mb L: 115/115 MS: 1 ChangeBinInt- 00:06:29.668 [2024-12-09 13:13:31.892882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:29.668 [2024-12-09 13:13:31.892907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.927 #21 NEW cov: 12307 ft: 13507 corp: 6/562b lim: 320 exec/s: 0 rss: 72Mb L: 110/115 MS: 1 ChangeBit- 00:06:29.927 [2024-12-09 13:13:31.932981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:29.927 [2024-12-09 13:13:31.933006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.927 #22 NEW cov: 12307 ft: 13616 corp: 7/677b lim: 320 exec/s: 0 rss: 72Mb L: 115/115 MS: 1 ChangeBit- 00:06:29.927 [2024-12-09 13:13:31.973128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:29.927 [2024-12-09 13:13:31.973152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.927 #23 NEW cov: 12307 ft: 13688 corp: 8/792b lim: 320 exec/s: 0 rss: 72Mb L: 115/115 MS: 1 ChangeByte- 00:06:29.927 [2024-12-09 13:13:32.033298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:29.927 [2024-12-09 13:13:32.033322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.927 #24 NEW cov: 12307 ft: 13740 corp: 9/907b lim: 320 exec/s: 0 rss: 72Mb L: 115/115 MS: 1 ShuffleBytes- 00:06:29.927 [2024-12-09 13:13:32.073435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:29.927 [2024-12-09 13:13:32.073460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.927 #25 NEW cov: 12307 ft: 13789 corp: 10/974b lim: 320 exec/s: 0 rss: 72Mb L: 67/115 MS: 1 EraseBytes- 00:06:29.927 [2024-12-09 13:13:32.113540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ecececec cdw11:ecececec 00:06:29.927 [2024-12-09 13:13:32.113565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.927 #28 NEW cov: 12307 ft: 13837 corp: 11/1041b lim: 320 exec/s: 0 rss: 72Mb L: 67/115 MS: 3 EraseBytes-ChangeBinInt-InsertRepeatedBytes- 00:06:30.186 [2024-12-09 13:13:32.173804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ecececec cdw11:ecececec 00:06:30.186 [2024-12-09 13:13:32.173829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.186 #29 NEW cov: 12307 ft: 13883 corp: 12/1108b lim: 320 exec/s: 0 rss: 73Mb L: 67/115 MS: 1 ChangeBit- 00:06:30.186 [2024-12-09 13:13:32.233888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:e6e6e6e6 cdw11:e6e6e6e6 00:06:30.186 [2024-12-09 13:13:32.233913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.186 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:30.186 #30 NEW cov: 12330 ft: 13920 corp: 13/1229b lim: 320 exec/s: 0 rss: 73Mb L: 121/121 MS: 1 InsertRepeatedBytes- 00:06:30.186 [2024-12-09 13:13:32.274016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ecececec cdw11:ecececec 00:06:30.186 [2024-12-09 13:13:32.274041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.186 #31 NEW cov: 12330 ft: 13964 corp: 14/1296b lim: 320 exec/s: 0 rss: 73Mb L: 67/121 MS: 1 ChangeBit- 00:06:30.186 [2024-12-09 13:13:32.314091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ecececec cdw11:ecececec 00:06:30.186 [2024-12-09 13:13:32.314115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.186 #32 NEW cov: 12330 ft: 13987 corp: 15/1363b lim: 320 exec/s: 32 rss: 73Mb L: 67/121 MS: 1 ChangeByte- 00:06:30.186 [2024-12-09 13:13:32.374278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.186 [2024-12-09 13:13:32.374303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.186 #33 NEW cov: 12330 ft: 13991 corp: 16/1478b lim: 320 exec/s: 33 rss: 73Mb L: 115/121 MS: 1 ShuffleBytes- 00:06:30.445 [2024-12-09 13:13:32.434482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ecececec cdw11:ecececec 00:06:30.445 [2024-12-09 13:13:32.434506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.445 #34 NEW cov: 12330 ft: 14049 corp: 17/1545b lim: 320 exec/s: 34 rss: 73Mb L: 67/121 MS: 1 ChangeByte- 00:06:30.445 [2024-12-09 13:13:32.474531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.445 [2024-12-09 13:13:32.474556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.445 #35 NEW cov: 12330 ft: 14068 corp: 18/1660b lim: 320 exec/s: 35 rss: 73Mb L: 115/121 MS: 1 ChangeBit- 00:06:30.445 [2024-12-09 13:13:32.534756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:e6e6e6e6 cdw11:e6e6e6e6 00:06:30.445 [2024-12-09 13:13:32.534782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.445 #36 NEW cov: 12330 ft: 14086 corp: 19/1781b lim: 320 exec/s: 36 rss: 73Mb L: 121/121 MS: 1 ChangeBit- 00:06:30.445 [2024-12-09 13:13:32.594920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.445 [2024-12-09 13:13:32.594945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.445 #37 NEW cov: 12330 ft: 14119 corp: 
20/1907b lim: 320 exec/s: 37 rss: 73Mb L: 126/126 MS: 1 CrossOver- 00:06:30.445 [2024-12-09 13:13:32.655137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.445 [2024-12-09 13:13:32.655162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.445 [2024-12-09 13:13:32.655227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.445 [2024-12-09 13:13:32.655241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.445 #38 NEW cov: 12330 ft: 14313 corp: 21/2087b lim: 320 exec/s: 38 rss: 73Mb L: 180/180 MS: 1 CrossOver- 00:06:30.704 [2024-12-09 13:13:32.695152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ecececec cdw11:ecececec 00:06:30.704 [2024-12-09 13:13:32.695177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.704 #39 NEW cov: 12330 ft: 14328 corp: 22/2155b lim: 320 exec/s: 39 rss: 73Mb L: 68/180 MS: 1 InsertByte- 00:06:30.704 [2024-12-09 13:13:32.755323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ecececec cdw11:ecececec 00:06:30.704 [2024-12-09 13:13:32.755348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.704 #40 NEW cov: 12330 ft: 14344 corp: 23/2224b lim: 320 exec/s: 40 rss: 73Mb L: 69/180 MS: 1 PersAutoDict- DE: "\000}"- 00:06:30.704 [2024-12-09 13:13:32.795458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ecececec cdw11:ecececec 00:06:30.704 [2024-12-09 13:13:32.795482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.704 #41 NEW cov: 12330 ft: 14364 corp: 24/2327b lim: 320 exec/s: 41 rss: 73Mb L: 103/180 MS: 1 CopyPart- 00:06:30.704 [2024-12-09 13:13:32.835651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:d6000000 00:06:30.704 [2024-12-09 13:13:32.835676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.704 [2024-12-09 13:13:32.835738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d6) qid:0 cid:5 nsid:d6d6d6d6 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd6d6d6d6d6d6d6d6 00:06:30.704 [2024-12-09 13:13:32.835753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.704 NEW_FUNC[1/1]: 0x1976e38 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:06:30.704 #42 NEW cov: 12354 ft: 14420 corp: 25/2516b lim: 320 exec/s: 42 rss: 73Mb L: 189/189 MS: 1 InsertRepeatedBytes- 00:06:30.704 [2024-12-09 13:13:32.875663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.704 [2024-12-09 13:13:32.875687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.704 #43 NEW cov: 
12354 ft: 14448 corp: 26/2631b lim: 320 exec/s: 43 rss: 73Mb L: 115/189 MS: 1 ChangeBit- 00:06:30.704 [2024-12-09 13:13:32.915887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:d6000000 00:06:30.704 [2024-12-09 13:13:32.915912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.704 [2024-12-09 13:13:32.915974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d6) qid:0 cid:5 nsid:d6d6d6d6 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x33d6d6d6d6d6d6d6 00:06:30.704 [2024-12-09 13:13:32.915989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.963 #44 NEW cov: 12354 ft: 14466 corp: 27/2820b lim: 320 exec/s: 44 rss: 73Mb L: 189/189 MS: 1 ChangeBinInt- 00:06:30.963 [2024-12-09 13:13:32.976013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:d6000000 00:06:30.963 [2024-12-09 13:13:32.976039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.963 [2024-12-09 13:13:32.976099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d6) qid:0 cid:5 nsid:d6d6d6d6 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x33d6d6d6d6d6d6d6 00:06:30.963 [2024-12-09 13:13:32.976113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.963 #45 NEW cov: 12354 ft: 14473 corp: 28/3009b lim: 320 exec/s: 45 rss: 74Mb L: 189/189 MS: 1 ChangeBit- 00:06:30.963 [2024-12-09 13:13:33.036090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.963 [2024-12-09 13:13:33.036116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.963 #46 NEW cov: 12354 ft: 14478 corp: 29/3120b lim: 320 exec/s: 46 rss: 74Mb L: 111/189 MS: 1 InsertByte- 00:06:30.963 [2024-12-09 13:13:33.076174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.963 [2024-12-09 13:13:33.076198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.963 #47 NEW cov: 12354 ft: 14509 corp: 30/3231b lim: 320 exec/s: 47 rss: 74Mb L: 111/189 MS: 1 ChangeBinInt- 00:06:30.963 [2024-12-09 13:13:33.116279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ecececec cdw11:ecececec 00:06:30.963 [2024-12-09 13:13:33.116303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.963 #48 NEW cov: 12354 ft: 14516 corp: 31/3298b lim: 320 exec/s: 48 rss: 74Mb L: 67/189 MS: 1 ShuffleBytes- 00:06:30.963 [2024-12-09 13:13:33.156386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.963 [2024-12-09 13:13:33.156411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.963 #49 NEW cov: 12354 ft: 14609 corp: 32/3424b lim: 320 exec/s: 49 rss: 74Mb L: 
126/189 MS: 1 ChangeBinInt- 00:06:31.223 [2024-12-09 13:13:33.216624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ecececec cdw11:ecececec 00:06:31.223 [2024-12-09 13:13:33.216649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.223 #50 NEW cov: 12354 ft: 14615 corp: 33/3492b lim: 320 exec/s: 50 rss: 74Mb L: 68/189 MS: 1 InsertByte- 00:06:31.223 [2024-12-09 13:13:33.256662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.223 [2024-12-09 13:13:33.256687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.223 #51 NEW cov: 12354 ft: 14624 corp: 34/3618b lim: 320 exec/s: 51 rss: 74Mb L: 126/189 MS: 1 ShuffleBytes- 00:06:31.223 [2024-12-09 13:13:33.296884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:d6000000 00:06:31.223 [2024-12-09 13:13:33.296909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.223 [2024-12-09 13:13:33.296970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d6) qid:0 cid:5 nsid:d6d6d6d6 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x33d6d6d6d6d6d6d6 00:06:31.223 [2024-12-09 13:13:33.296984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.223 #52 NEW cov: 12354 ft: 14634 corp: 35/3807b lim: 320 exec/s: 52 rss: 74Mb L: 189/189 MS: 1 ChangeByte- 00:06:31.223 [2024-12-09 13:13:33.336919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ecececec cdw11:ecececec 00:06:31.223 [2024-12-09 13:13:33.336944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.223 #53 NEW cov: 12354 ft: 14651 corp: 36/3910b lim: 320 exec/s: 26 rss: 74Mb L: 103/189 MS: 1 PersAutoDict- DE: "\003\000\000\000"- 00:06:31.223 #53 DONE cov: 12354 ft: 14651 corp: 36/3910b lim: 320 exec/s: 26 rss: 74Mb 00:06:31.223 ###### Recommended dictionary. ###### 00:06:31.223 "\000}" # Uses: 1 00:06:31.223 "\003\000\000\000" # Uses: 1 00:06:31.223 ###### End of recommended dictionary. 
###### 00:06:31.223 Done 53 runs in 2 second(s) 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:31.482 13:13:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:31.482 [2024-12-09 13:13:33.529214] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:31.482 [2024-12-09 13:13:33.529284] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid294967 ] 00:06:31.741 [2024-12-09 13:13:33.731217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.741 [2024-12-09 13:13:33.765198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.741 [2024-12-09 13:13:33.824259] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.741 [2024-12-09 13:13:33.840553] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:31.741 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:31.741 INFO: Seed: 2021658439 00:06:31.741 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:06:31.741 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:06:31.741 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:31.741 INFO: A corpus is not provided, starting from an empty corpus 00:06:31.741 #2 INITED exec/s: 0 rss: 65Mb 00:06:31.741 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:31.741 This may also happen if the target rejected all inputs we tried so far 00:06:31.741 [2024-12-09 13:13:33.895858] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:31.741 [2024-12-09 13:13:33.895978] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:31.741 [2024-12-09 13:13:33.896087] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:31.741 [2024-12-09 13:13:33.896195] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:31.741 [2024-12-09 13:13:33.896405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.741 [2024-12-09 13:13:33.896436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.742 [2024-12-09 13:13:33.896496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.742 [2024-12-09 13:13:33.896511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.742 [2024-12-09 13:13:33.896567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.742 [2024-12-09 13:13:33.896582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.742 [2024-12-09 13:13:33.896640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.742 [2024-12-09 13:13:33.896657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.000 NEW_FUNC[1/717]: 0x43c4e8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:32.000 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:32.000 #4 NEW cov: 12202 ft: 12201 corp: 2/27b lim: 30 exec/s: 0 rss: 72Mb L: 26/26 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:06:32.000 [2024-12-09 13:13:34.226817] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:32.000 [2024-12-09 13:13:34.226947] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.000 [2024-12-09 13:13:34.227058] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.000 [2024-12-09 13:13:34.227166] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: 
Invalid log page offset 0x30000ffff 00:06:32.000 [2024-12-09 13:13:34.227414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.000 [2024-12-09 13:13:34.227470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.000 [2024-12-09 13:13:34.227555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.000 [2024-12-09 13:13:34.227582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.000 [2024-12-09 13:13:34.227669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.000 [2024-12-09 13:13:34.227695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.000 [2024-12-09 13:13:34.227773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.000 [2024-12-09 13:13:34.227798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.259 #5 NEW cov: 12315 ft: 12949 corp: 3/54b lim: 30 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 CrossOver- 00:06:32.259 [2024-12-09 13:13:34.296797] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:32.259 [2024-12-09 13:13:34.296909] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.259 [2024-12-09 13:13:34.297012] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.259 [2024-12-09 13:13:34.297112] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.259 [2024-12-09 13:13:34.297313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.297339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.259 [2024-12-09 13:13:34.297395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.297410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.259 [2024-12-09 13:13:34.297465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.297478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.259 [2024-12-09 13:13:34.297536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.297550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.259 #11 NEW cov: 12321 ft: 13178 corp: 4/83b lim: 30 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:32.259 [2024-12-09 13:13:34.336836] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:32.259 [2024-12-09 13:13:34.336961] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:32.259 [2024-12-09 13:13:34.337064] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.259 [2024-12-09 13:13:34.337166] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.259 [2024-12-09 13:13:34.337368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.337395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.259 [2024-12-09 13:13:34.337450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.337464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.259 [2024-12-09 13:13:34.337519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.337533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.259 [2024-12-09 13:13:34.337592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.337604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.259 #12 NEW cov: 12406 ft: 13475 corp: 5/109b lim: 30 exec/s: 0 rss: 72Mb L: 26/29 MS: 1 ChangeByte- 00:06:32.259 [2024-12-09 13:13:34.376917] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:32.259 [2024-12-09 13:13:34.377044] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:32.259 [2024-12-09 13:13:34.377243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.377269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.259 [2024-12-09 13:13:34.377326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.377341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.259 #13 NEW cov: 12429 ft: 14139 corp: 6/124b lim: 30 exec/s: 0 rss: 72Mb L: 15/29 MS: 1 InsertRepeatedBytes- 00:06:32.259 [2024-12-09 13:13:34.416988] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.259 [2024-12-09 
13:13:34.417209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:bcff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.417235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.259 #17 NEW cov: 12429 ft: 14588 corp: 7/132b lim: 30 exec/s: 0 rss: 72Mb L: 8/29 MS: 4 InsertByte-ChangeBit-EraseBytes-CrossOver- 00:06:32.259 [2024-12-09 13:13:34.457196] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.259 [2024-12-09 13:13:34.457328] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.259 [2024-12-09 13:13:34.457432] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.259 [2024-12-09 13:13:34.457536] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.259 [2024-12-09 13:13:34.457755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:03038303 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.457782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.259 [2024-12-09 13:13:34.457837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.457852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.259 [2024-12-09 13:13:34.457907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:06ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.457920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.259 [2024-12-09 13:13:34.457973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.259 [2024-12-09 13:13:34.457987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.259 #18 NEW cov: 12429 ft: 14669 corp: 8/161b lim: 30 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:32.518 [2024-12-09 13:13:34.517314] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:32.518 [2024-12-09 13:13:34.517443] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:32.518 [2024-12-09 13:13:34.517649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.518 [2024-12-09 13:13:34.517676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.518 [2024-12-09 13:13:34.517730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.518 [2024-12-09 13:13:34.517745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.518 #19 NEW cov: 12429 ft: 14694 corp: 9/176b lim: 30 exec/s: 0 rss: 72Mb L: 15/29 MS: 1 CopyPart- 00:06:32.518 [2024-12-09 13:13:34.577582] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:32.518 [2024-12-09 13:13:34.577699] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.518 [2024-12-09 13:13:34.577802] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.518 [2024-12-09 13:13:34.577900] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.518 [2024-12-09 13:13:34.577999] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:06:32.518 [2024-12-09 13:13:34.578201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.518 [2024-12-09 13:13:34.578225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.518 [2024-12-09 13:13:34.578278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.518 [2024-12-09 13:13:34.578293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.518 [2024-12-09 13:13:34.578347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.518 [2024-12-09 13:13:34.578361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.518 [2024-12-09 13:13:34.578413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff7883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.518 [2024-12-09 13:13:34.578427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.518 [2024-12-09 13:13:34.578480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.518 [2024-12-09 13:13:34.578494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.518 #20 NEW cov: 12429 ft: 14801 corp: 10/206b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 CrossOver- 00:06:32.518 [2024-12-09 13:13:34.637752] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:32.518 [2024-12-09 13:13:34.637881] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:32.518 [2024-12-09 13:13:34.637984] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.518 [2024-12-09 13:13:34.638083] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.518 [2024-12-09 13:13:34.638187] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000000ff 00:06:32.518 [2024-12-09 13:13:34.638393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:32.518 [2024-12-09 13:13:34.638419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.519 [2024-12-09 13:13:34.638473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.519 [2024-12-09 13:13:34.638488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.519 [2024-12-09 13:13:34.638541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.519 [2024-12-09 13:13:34.638555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.519 [2024-12-09 13:13:34.638611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.519 [2024-12-09 13:13:34.638625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.519 [2024-12-09 13:13:34.638681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ff7883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.519 [2024-12-09 13:13:34.638695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.519 #21 NEW cov: 12429 ft: 14840 corp: 11/236b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 CopyPart- 00:06:32.519 [2024-12-09 13:13:34.697930] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:32.519 [2024-12-09 13:13:34.698042] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.519 [2024-12-09 13:13:34.698146] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.519 [2024-12-09 13:13:34.698249] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.519 [2024-12-09 13:13:34.698465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.519 [2024-12-09 13:13:34.698494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.519 [2024-12-09 13:13:34.698551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.519 [2024-12-09 13:13:34.698566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.519 [2024-12-09 13:13:34.698616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.519 [2024-12-09 13:13:34.698631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.519 [2024-12-09 13:13:34.698696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff 
cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.519 [2024-12-09 13:13:34.698710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.519 #22 NEW cov: 12429 ft: 14914 corp: 12/264b lim: 30 exec/s: 0 rss: 73Mb L: 28/30 MS: 1 InsertByte- 00:06:32.519 [2024-12-09 13:13:34.758110] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:32.519 [2024-12-09 13:13:34.758220] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.519 [2024-12-09 13:13:34.758324] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.519 [2024-12-09 13:13:34.758424] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.519 [2024-12-09 13:13:34.758522] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000000ff 00:06:32.519 [2024-12-09 13:13:34.758736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.519 [2024-12-09 13:13:34.758762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.519 [2024-12-09 13:13:34.758820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.519 [2024-12-09 13:13:34.758834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.519 [2024-12-09 13:13:34.758888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.519 [2024-12-09 13:13:34.758902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.519 [2024-12-09 13:13:34.758957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.519 [2024-12-09 13:13:34.758971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.519 [2024-12-09 13:13:34.759023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ff7883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.519 [2024-12-09 13:13:34.759037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.778 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:32.778 #23 NEW cov: 12452 ft: 14994 corp: 13/294b lim: 30 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 CopyPart- 00:06:32.778 [2024-12-09 13:13:34.818141] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:32.778 [2024-12-09 13:13:34.818251] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:32.778 [2024-12-09 13:13:34.818468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.778 [2024-12-09 
13:13:34.818494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.778 [2024-12-09 13:13:34.818549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.778 [2024-12-09 13:13:34.818562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.778 #24 NEW cov: 12452 ft: 15018 corp: 14/309b lim: 30 exec/s: 0 rss: 73Mb L: 15/30 MS: 1 ChangeBit- 00:06:32.778 [2024-12-09 13:13:34.878389] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:32.778 [2024-12-09 13:13:34.878501] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.778 [2024-12-09 13:13:34.878604] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.778 [2024-12-09 13:13:34.878718] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.778 [2024-12-09 13:13:34.878917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.778 [2024-12-09 13:13:34.878943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.778 [2024-12-09 13:13:34.879000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.778 [2024-12-09 13:13:34.879015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.778 [2024-12-09 13:13:34.879070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.778 [2024-12-09 13:13:34.879084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.778 [2024-12-09 13:13:34.879139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.778 [2024-12-09 13:13:34.879153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.778 #25 NEW cov: 12452 ft: 15024 corp: 15/337b lim: 30 exec/s: 25 rss: 73Mb L: 28/30 MS: 1 CopyPart- 00:06:32.778 [2024-12-09 13:13:34.918491] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.778 [2024-12-09 13:13:34.918626] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.778 [2024-12-09 13:13:34.918727] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.778 [2024-12-09 13:13:34.918827] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:32.778 [2024-12-09 13:13:34.918925] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000000ff 00:06:32.778 [2024-12-09 13:13:34.919135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:32.778 [2024-12-09 13:13:34.919162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.778 [2024-12-09 13:13:34.919219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.778 [2024-12-09 13:13:34.919233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.778 [2024-12-09 13:13:34.919289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.778 [2024-12-09 13:13:34.919306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.778 [2024-12-09 13:13:34.919361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.778 [2024-12-09 13:13:34.919374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.778 [2024-12-09 13:13:34.919429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ff7883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.778 [2024-12-09 13:13:34.919443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.778 #26 NEW cov: 12452 ft: 15065 corp: 16/367b lim: 30 exec/s: 26 rss: 73Mb L: 30/30 MS: 1 CrossOver- 00:06:32.778 [2024-12-09 13:13:34.978716] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:32.778 [2024-12-09 13:13:34.978853] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.778 [2024-12-09 13:13:34.978956] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.778 [2024-12-09 13:13:34.979059] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.778 [2024-12-09 13:13:34.979262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.778 [2024-12-09 13:13:34.979288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.778 [2024-12-09 13:13:34.979344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.778 [2024-12-09 13:13:34.979357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.778 [2024-12-09 13:13:34.979412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:70ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.779 [2024-12-09 13:13:34.979425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.779 [2024-12-09 13:13:34.979480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:32.779 [2024-12-09 13:13:34.979494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.779 #27 NEW cov: 12452 ft: 15098 corp: 17/393b lim: 30 exec/s: 27 rss: 73Mb L: 26/30 MS: 1 ChangeByte- 00:06:32.779 [2024-12-09 13:13:35.018708] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:32.779 [2024-12-09 13:13:35.018823] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:32.779 [2024-12-09 13:13:35.018925] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:32.779 [2024-12-09 13:13:35.019131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.779 [2024-12-09 13:13:35.019158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.779 [2024-12-09 13:13:35.019214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.779 [2024-12-09 13:13:35.019228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.779 [2024-12-09 13:13:35.019285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.779 [2024-12-09 13:13:35.019303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.040 #28 NEW cov: 12452 ft: 15359 corp: 18/413b lim: 30 exec/s: 28 rss: 73Mb L: 20/30 MS: 1 CopyPart- 00:06:33.040 [2024-12-09 13:13:35.058877] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:33.040 [2024-12-09 13:13:35.058988] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.040 [2024-12-09 13:13:35.059091] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.040 [2024-12-09 13:13:35.059192] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786432) > buf size (4096) 00:06:33.040 [2024-12-09 13:13:35.059292] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000000ff 00:06:33.040 [2024-12-09 13:13:35.059495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.040 [2024-12-09 13:13:35.059522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.040 [2024-12-09 13:13:35.059576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.040 [2024-12-09 13:13:35.059594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.040 [2024-12-09 13:13:35.059649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.040 [2024-12-09 13:13:35.059663] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.040 [2024-12-09 13:13:35.059717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.040 [2024-12-09 13:13:35.059731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.040 [2024-12-09 13:13:35.059784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:007883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.040 [2024-12-09 13:13:35.059798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.040 #29 NEW cov: 12452 ft: 15443 corp: 19/443b lim: 30 exec/s: 29 rss: 73Mb L: 30/30 MS: 1 ChangeBinInt- 00:06:33.040 [2024-12-09 13:13:35.099015] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.040 [2024-12-09 13:13:35.099126] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.040 [2024-12-09 13:13:35.099245] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.040 [2024-12-09 13:13:35.099349] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:33.040 [2024-12-09 13:13:35.099447] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000000ff 00:06:33.040 [2024-12-09 13:13:35.099658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.040 [2024-12-09 13:13:35.099685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.040 [2024-12-09 13:13:35.099741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.040 [2024-12-09 13:13:35.099756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.040 [2024-12-09 13:13:35.099811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.040 [2024-12-09 13:13:35.099828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.040 [2024-12-09 13:13:35.099883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.041 [2024-12-09 13:13:35.099897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.041 [2024-12-09 13:13:35.099952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ff7883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.041 [2024-12-09 13:13:35.099966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.041 #30 NEW cov: 12452 ft: 15453 corp: 20/473b lim: 30 exec/s: 30 rss: 73Mb L: 
30/30 MS: 1 CrossOver- 00:06:33.041 [2024-12-09 13:13:35.159160] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:33.041 [2024-12-09 13:13:35.159271] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.041 [2024-12-09 13:13:35.159371] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.041 [2024-12-09 13:13:35.159472] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.041 [2024-12-09 13:13:35.159688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.041 [2024-12-09 13:13:35.159714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.041 [2024-12-09 13:13:35.159769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff837f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.041 [2024-12-09 13:13:35.159782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.041 [2024-12-09 13:13:35.159837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.041 [2024-12-09 13:13:35.159850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.041 [2024-12-09 13:13:35.159905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.041 [2024-12-09 13:13:35.159920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.041 #31 NEW cov: 12452 ft: 15456 corp: 21/501b lim: 30 exec/s: 31 rss: 73Mb L: 28/30 MS: 1 ChangeBit- 00:06:33.041 [2024-12-09 13:13:35.219240] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:33.041 [2024-12-09 13:13:35.219369] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:33.041 [2024-12-09 13:13:35.219570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.041 [2024-12-09 13:13:35.219601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.041 [2024-12-09 13:13:35.219657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.041 [2024-12-09 13:13:35.219671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.041 #32 NEW cov: 12452 ft: 15488 corp: 22/516b lim: 30 exec/s: 32 rss: 73Mb L: 15/30 MS: 1 ChangeByte- 00:06:33.041 [2024-12-09 13:13:35.259307] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000aff 00:06:33.041 [2024-12-09 13:13:35.259519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2eff83ff cdw11:00000003 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:33.041 [2024-12-09 13:13:35.259551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.041 #36 NEW cov: 12452 ft: 15521 corp: 23/524b lim: 30 exec/s: 36 rss: 73Mb L: 8/30 MS: 4 ChangeByte-ShuffleBytes-CopyPart-CrossOver- 00:06:33.300 [2024-12-09 13:13:35.299491] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:33.300 [2024-12-09 13:13:35.299608] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:33.300 [2024-12-09 13:13:35.299712] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:33.300 [2024-12-09 13:13:35.299912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.300 [2024-12-09 13:13:35.299937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.300 [2024-12-09 13:13:35.299992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.300 [2024-12-09 13:13:35.300006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.300 [2024-12-09 13:13:35.300062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.300 [2024-12-09 13:13:35.300076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.300 #37 NEW cov: 12452 ft: 15538 corp: 24/544b lim: 30 exec/s: 37 rss: 73Mb L: 20/30 MS: 1 ChangeByte- 00:06:33.300 [2024-12-09 13:13:35.359678] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.300 [2024-12-09 13:13:35.359803] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:33.300 [2024-12-09 13:13:35.359905] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:33.300 [2024-12-09 13:13:35.360006] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:33.300 [2024-12-09 13:13:35.360212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:78ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.300 [2024-12-09 13:13:35.360238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.300 [2024-12-09 13:13:35.360294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.300 [2024-12-09 13:13:35.360308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.300 [2024-12-09 13:13:35.360363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.300 [2024-12-09 13:13:35.360377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.300 [2024-12-09 13:13:35.360433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.300 [2024-12-09 13:13:35.360447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.300 #38 NEW cov: 12452 ft: 15561 corp: 25/569b lim: 30 exec/s: 38 rss: 73Mb L: 25/30 MS: 1 InsertRepeatedBytes- 00:06:33.300 [2024-12-09 13:13:35.399803] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:33.300 [2024-12-09 13:13:35.399933] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:33.300 [2024-12-09 13:13:35.400037] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.300 [2024-12-09 13:13:35.400137] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.300 [2024-12-09 13:13:35.400353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff5502ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.300 [2024-12-09 13:13:35.400379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.300 [2024-12-09 13:13:35.400433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.300 [2024-12-09 13:13:35.400448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.300 [2024-12-09 13:13:35.400502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.300 [2024-12-09 13:13:35.400516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.300 [2024-12-09 13:13:35.400575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.300 [2024-12-09 13:13:35.400594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.300 #39 NEW cov: 12452 ft: 15621 corp: 26/595b lim: 30 exec/s: 39 rss: 73Mb L: 26/30 MS: 1 ChangeByte- 00:06:33.300 [2024-12-09 13:13:35.439883] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:33.301 [2024-12-09 13:13:35.439994] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.301 [2024-12-09 13:13:35.440100] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.301 [2024-12-09 13:13:35.440206] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.301 [2024-12-09 13:13:35.440405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff0002f9 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.301 [2024-12-09 13:13:35.440431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.301 [2024-12-09 13:13:35.440489] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.301 [2024-12-09 13:13:35.440504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.301 [2024-12-09 13:13:35.440558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:70ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.301 [2024-12-09 13:13:35.440572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.301 [2024-12-09 13:13:35.440631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.301 [2024-12-09 13:13:35.440645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.301 #40 NEW cov: 12452 ft: 15630 corp: 27/621b lim: 30 exec/s: 40 rss: 73Mb L: 26/30 MS: 1 ChangeBinInt- 00:06:33.301 [2024-12-09 13:13:35.500094] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:33.301 [2024-12-09 13:13:35.500224] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:33.301 [2024-12-09 13:13:35.500330] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.301 [2024-12-09 13:13:35.500434] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.301 [2024-12-09 13:13:35.500642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff5502ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.301 [2024-12-09 13:13:35.500678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.301 [2024-12-09 13:13:35.500734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.301 [2024-12-09 13:13:35.500748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.301 [2024-12-09 13:13:35.500801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.301 [2024-12-09 13:13:35.500815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.301 [2024-12-09 13:13:35.500871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffc783ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.301 [2024-12-09 13:13:35.500885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.301 #41 NEW cov: 12452 ft: 15646 corp: 28/647b lim: 30 exec/s: 41 rss: 74Mb L: 26/30 MS: 1 ChangeByte- 00:06:33.560 [2024-12-09 13:13:35.560178] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 00:06:33.560 [2024-12-09 13:13:35.560307] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (123364) > buf size (4096) 
00:06:33.560 [2024-12-09 13:13:35.560514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.560540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.560 [2024-12-09 13:13:35.560604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:78780078 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.560619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.560 #42 NEW cov: 12452 ft: 15662 corp: 29/659b lim: 30 exec/s: 42 rss: 74Mb L: 12/30 MS: 1 EraseBytes- 00:06:33.560 [2024-12-09 13:13:35.600396] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.560 [2024-12-09 13:13:35.600527] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.560 [2024-12-09 13:13:35.600638] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.560 [2024-12-09 13:13:35.600739] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.560 [2024-12-09 13:13:35.600840] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:06:33.560 [2024-12-09 13:13:35.601058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.601083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.560 [2024-12-09 13:13:35.601139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff8378 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.601153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.560 [2024-12-09 13:13:35.601207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.601221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.560 [2024-12-09 13:13:35.601279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff7883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.601294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.560 [2024-12-09 13:13:35.601348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.601362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.560 #43 NEW cov: 12452 ft: 15667 corp: 30/689b lim: 30 exec/s: 43 rss: 74Mb L: 30/30 MS: 1 CopyPart- 00:06:33.560 [2024-12-09 13:13:35.640395] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 
0x30000ffff 00:06:33.560 [2024-12-09 13:13:35.640633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.640657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.560 #44 NEW cov: 12452 ft: 15688 corp: 31/696b lim: 30 exec/s: 44 rss: 74Mb L: 7/30 MS: 1 CrossOver- 00:06:33.560 [2024-12-09 13:13:35.680685] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:33.560 [2024-12-09 13:13:35.680800] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:33.560 [2024-12-09 13:13:35.680908] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.560 [2024-12-09 13:13:35.681010] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.560 [2024-12-09 13:13:35.681115] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000000ff 00:06:33.560 [2024-12-09 13:13:35.681327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.681354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.560 [2024-12-09 13:13:35.681412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff00f8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.681427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.560 [2024-12-09 13:13:35.681482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.681497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.560 [2024-12-09 13:13:35.681554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.681568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.560 [2024-12-09 13:13:35.681625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ff7883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.681639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.560 #45 NEW cov: 12452 ft: 15704 corp: 32/726b lim: 30 exec/s: 45 rss: 74Mb L: 30/30 MS: 1 ChangeBinInt- 00:06:33.560 [2024-12-09 13:13:35.720743] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:33.560 [2024-12-09 13:13:35.720871] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.560 [2024-12-09 13:13:35.720980] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.560 [2024-12-09 13:13:35.721088] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid 
log page offset 0x30000ffff 00:06:33.560 [2024-12-09 13:13:35.721189] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:06:33.560 [2024-12-09 13:13:35.721421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.721447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.560 [2024-12-09 13:13:35.721503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.721517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.560 [2024-12-09 13:13:35.721570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.721584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.560 [2024-12-09 13:13:35.721643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff7883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.721656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.560 [2024-12-09 13:13:35.721711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.721725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.560 #46 NEW cov: 12452 ft: 15718 corp: 33/756b lim: 30 exec/s: 46 rss: 74Mb L: 30/30 MS: 1 ChangeBinInt- 00:06:33.560 [2024-12-09 13:13:35.760749] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:33.560 [2024-12-09 13:13:35.760954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.560 [2024-12-09 13:13:35.760985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.560 #47 NEW cov: 12452 ft: 15767 corp: 34/763b lim: 30 exec/s: 47 rss: 74Mb L: 7/30 MS: 1 ChangeByte- 00:06:33.820 [2024-12-09 13:13:35.821080] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:33.820 [2024-12-09 13:13:35.821192] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.820 [2024-12-09 13:13:35.821297] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.820 [2024-12-09 13:13:35.821395] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:06:33.820 [2024-12-09 13:13:35.821600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.820 [2024-12-09 13:13:35.821641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.820 [2024-12-09 13:13:35.821695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.820 [2024-12-09 13:13:35.821709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.820 [2024-12-09 13:13:35.821766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.820 [2024-12-09 13:13:35.821780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.820 [2024-12-09 13:13:35.821837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.820 [2024-12-09 13:13:35.821852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.820 #48 NEW cov: 12452 ft: 15776 corp: 35/791b lim: 30 exec/s: 48 rss: 74Mb L: 28/30 MS: 1 ChangeBinInt- 00:06:33.820 [2024-12-09 13:13:35.861164] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:33.820 [2024-12-09 13:13:35.861294] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:33.820 [2024-12-09 13:13:35.861401] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.820 [2024-12-09 13:13:35.861505] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.820 [2024-12-09 13:13:35.861720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff5502ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.820 [2024-12-09 13:13:35.861746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.820 [2024-12-09 13:13:35.861803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.820 [2024-12-09 13:13:35.861817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.820 [2024-12-09 13:13:35.861871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.820 [2024-12-09 13:13:35.861885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.820 [2024-12-09 13:13:35.861940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffc783ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.820 [2024-12-09 13:13:35.861954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.820 #49 NEW cov: 12452 ft: 15786 corp: 36/817b lim: 30 exec/s: 24 rss: 74Mb L: 26/30 MS: 1 CrossOver- 00:06:33.820 #49 DONE cov: 12452 ft: 15786 corp: 36/817b lim: 30 exec/s: 24 rss: 74Mb 00:06:33.820 Done 49 runs in 2 second(s) 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:33.820 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:33.821 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:33.821 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:33.821 13:13:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:33.821 [2024-12-09 13:13:36.053444] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:33.821 [2024-12-09 13:13:36.053526] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid295263 ] 00:06:34.080 [2024-12-09 13:13:36.254943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.080 [2024-12-09 13:13:36.288440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.339 [2024-12-09 13:13:36.348050] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.339 [2024-12-09 13:13:36.364348] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:34.339 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:34.339 INFO: Seed: 250706351 00:06:34.339 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:06:34.339 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:06:34.339 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:34.339 INFO: A corpus is not provided, starting from an empty corpus 00:06:34.339 #2 INITED exec/s: 0 rss: 66Mb 00:06:34.339 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:34.339 This may also happen if the target rejected all inputs we tried so far 00:06:34.339 [2024-12-09 13:13:36.441485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.339 [2024-12-09 13:13:36.441528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.339 [2024-12-09 13:13:36.441653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.339 [2024-12-09 13:13:36.441671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.339 [2024-12-09 13:13:36.441791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.339 [2024-12-09 13:13:36.441807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.340 [2024-12-09 13:13:36.441930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.340 [2024-12-09 13:13:36.441949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.599 NEW_FUNC[1/716]: 0x43ef98 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:34.599 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:34.599 #4 NEW cov: 12135 ft: 12141 corp: 2/36b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:34.599 [2024-12-09 13:13:36.772504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.599 [2024-12-09 13:13:36.772547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.599 [2024-12-09 13:13:36.772670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.599 [2024-12-09 13:13:36.772696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.599 [2024-12-09 13:13:36.772819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d0d000d0 cdw11:d00048d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.599 [2024-12-09 13:13:36.772837] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.599 [2024-12-09 13:13:36.772977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.599 [2024-12-09 13:13:36.772994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.599 #5 NEW cov: 12265 ft: 12649 corp: 3/71b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 ChangeByte- 00:06:34.599 [2024-12-09 13:13:36.841503] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:34.599 [2024-12-09 13:13:36.841683] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:34.599 [2024-12-09 13:13:36.842024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:03d00052 cdw11:0000d000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.599 [2024-12-09 13:13:36.842057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.599 [2024-12-09 13:13:36.842171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.599 [2024-12-09 13:13:36.842195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.599 [2024-12-09 13:13:36.842316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.599 [2024-12-09 13:13:36.842339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.858 #10 NEW cov: 12287 ft: 13647 corp: 4/97b lim: 35 exec/s: 0 rss: 72Mb L: 26/35 MS: 5 InsertByte-CrossOver-CopyPart-ChangeBit-InsertRepeatedBytes- 00:06:34.858 [2024-12-09 13:13:36.891562] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:34.858 [2024-12-09 13:13:36.891726] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:34.858 [2024-12-09 13:13:36.892081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:03d00052 cdw11:0000d000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.858 [2024-12-09 13:13:36.892110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.858 [2024-12-09 13:13:36.892235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:80000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.858 [2024-12-09 13:13:36.892258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.858 [2024-12-09 13:13:36.892374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.858 [2024-12-09 13:13:36.892394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.858 #11 NEW cov: 12372 ft: 13875 corp: 
5/123b lim: 35 exec/s: 0 rss: 72Mb L: 26/35 MS: 1 ChangeBit- 00:06:34.858 [2024-12-09 13:13:36.962739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.858 [2024-12-09 13:13:36.962766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.858 [2024-12-09 13:13:36.962892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.858 [2024-12-09 13:13:36.962909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.858 [2024-12-09 13:13:36.963033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.858 [2024-12-09 13:13:36.963048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.858 [2024-12-09 13:13:36.963174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.858 [2024-12-09 13:13:36.963191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.858 #12 NEW cov: 12372 ft: 13942 corp: 6/158b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 ShuffleBytes- 00:06:34.858 [2024-12-09 13:13:37.011962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01d0000a cdw11:00000900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.858 [2024-12-09 13:13:37.011992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.859 #16 NEW cov: 12372 ft: 14356 corp: 7/165b lim: 35 exec/s: 0 rss: 72Mb L: 7/35 MS: 4 InsertRepeatedBytes-CrossOver-ChangeBinInt-CMP- DE: "\011\000\000\000"- 00:06:34.859 [2024-12-09 13:13:37.063092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.859 [2024-12-09 13:13:37.063118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.859 [2024-12-09 13:13:37.063239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.859 [2024-12-09 13:13:37.063255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.859 [2024-12-09 13:13:37.063371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d0d000d0 cdw11:d0004850 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.859 [2024-12-09 13:13:37.063387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.859 [2024-12-09 13:13:37.063508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.859 [2024-12-09 13:13:37.063524] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.118 #17 NEW cov: 12372 ft: 14430 corp: 8/200b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeBit- 00:06:35.118 [2024-12-09 13:13:37.132891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.118 [2024-12-09 13:13:37.132917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.118 [2024-12-09 13:13:37.133033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.118 [2024-12-09 13:13:37.133051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.118 #18 NEW cov: 12372 ft: 14576 corp: 9/222b lim: 35 exec/s: 0 rss: 73Mb L: 22/35 MS: 1 EraseBytes- 00:06:35.118 [2024-12-09 13:13:37.183087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.118 [2024-12-09 13:13:37.183115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.118 [2024-12-09 13:13:37.183238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.118 [2024-12-09 13:13:37.183255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.118 #19 NEW cov: 12372 ft: 14606 corp: 10/245b lim: 35 exec/s: 0 rss: 73Mb L: 23/35 MS: 1 InsertByte- 00:06:35.118 [2024-12-09 13:13:37.253593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.118 [2024-12-09 13:13:37.253621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.118 [2024-12-09 13:13:37.253744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.118 [2024-12-09 13:13:37.253762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.118 [2024-12-09 13:13:37.253895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:d0000050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.118 [2024-12-09 13:13:37.253911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.118 [2024-12-09 13:13:37.254040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.118 [2024-12-09 13:13:37.254055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.118 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:35.118 #20 NEW cov: 12395 ft: 14641 corp: 
11/280b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 PersAutoDict- DE: "\011\000\000\000"- 00:06:35.118 [2024-12-09 13:13:37.323875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.118 [2024-12-09 13:13:37.323904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.118 [2024-12-09 13:13:37.324020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.118 [2024-12-09 13:13:37.324036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.118 [2024-12-09 13:13:37.324171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.118 [2024-12-09 13:13:37.324189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.118 [2024-12-09 13:13:37.324323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.118 [2024-12-09 13:13:37.324341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.118 #21 NEW cov: 12395 ft: 14673 corp: 12/315b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 PersAutoDict- DE: "\011\000\000\000"- 00:06:35.377 [2024-12-09 13:13:37.373056] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:35.377 [2024-12-09 13:13:37.373225] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:35.377 [2024-12-09 13:13:37.373560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01d0000a cdw11:00000900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.377 [2024-12-09 13:13:37.373591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.377 [2024-12-09 13:13:37.373716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.377 [2024-12-09 13:13:37.373735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.377 [2024-12-09 13:13:37.373855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.373878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.378 #22 NEW cov: 12395 ft: 14685 corp: 13/338b lim: 35 exec/s: 22 rss: 73Mb L: 23/35 MS: 1 InsertRepeatedBytes- 00:06:35.378 [2024-12-09 13:13:37.444295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.444323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:06:35.378 [2024-12-09 13:13:37.444444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.444461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.378 [2024-12-09 13:13:37.444590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.444608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.378 [2024-12-09 13:13:37.444738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.444756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.378 #23 NEW cov: 12395 ft: 14694 corp: 14/373b lim: 35 exec/s: 23 rss: 73Mb L: 35/35 MS: 1 ChangeByte- 00:06:35.378 [2024-12-09 13:13:37.514511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.514539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.378 [2024-12-09 13:13:37.514660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.514676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.378 [2024-12-09 13:13:37.514797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d0d000d0 cdw11:d0004844 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.514813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.378 [2024-12-09 13:13:37.514936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.514952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.378 #24 NEW cov: 12395 ft: 14705 corp: 15/408b lim: 35 exec/s: 24 rss: 73Mb L: 35/35 MS: 1 ChangeByte- 00:06:35.378 [2024-12-09 13:13:37.564591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:2f00d030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.564617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.378 [2024-12-09 13:13:37.564766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2f2f002f cdw11:d0002f39 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.564783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.378 
[2024-12-09 13:13:37.564905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.564922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.378 [2024-12-09 13:13:37.565055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.565073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.378 #25 NEW cov: 12395 ft: 14753 corp: 16/443b lim: 35 exec/s: 25 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:35.378 [2024-12-09 13:13:37.614725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.614752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.378 [2024-12-09 13:13:37.614894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.614913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.378 [2024-12-09 13:13:37.615040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:c9d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.615067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.378 [2024-12-09 13:13:37.615189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.378 [2024-12-09 13:13:37.615207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.637 #26 NEW cov: 12395 ft: 14807 corp: 17/478b lim: 35 exec/s: 26 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:35.637 [2024-12-09 13:13:37.684972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:02d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.637 [2024-12-09 13:13:37.685000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.637 [2024-12-09 13:13:37.685127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.637 [2024-12-09 13:13:37.685142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.637 [2024-12-09 13:13:37.685262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.637 [2024-12-09 13:13:37.685281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.637 [2024-12-09 
13:13:37.685410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.637 [2024-12-09 13:13:37.685426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.637 #27 NEW cov: 12395 ft: 14815 corp: 18/513b lim: 35 exec/s: 27 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:06:35.637 [2024-12-09 13:13:37.734324] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:35.637 [2024-12-09 13:13:37.734501] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:35.637 [2024-12-09 13:13:37.734845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff0900ff cdw11:c600273c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.637 [2024-12-09 13:13:37.734875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.637 [2024-12-09 13:13:37.735001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:520300b8 cdw11:0000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.637 [2024-12-09 13:13:37.735017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.637 [2024-12-09 13:13:37.735139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.637 [2024-12-09 13:13:37.735164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.637 [2024-12-09 13:13:37.735286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.637 [2024-12-09 13:13:37.735310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.637 #33 NEW cov: 12395 ft: 14863 corp: 19/547b lim: 35 exec/s: 33 rss: 73Mb L: 34/35 MS: 1 CMP- DE: "\377\377\011'<\306\201\270"- 00:06:35.637 [2024-12-09 13:13:37.805374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0c100d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.637 [2024-12-09 13:13:37.805400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.637 [2024-12-09 13:13:37.805528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.637 [2024-12-09 13:13:37.805545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.637 [2024-12-09 13:13:37.805678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d0d000d0 cdw11:d0004844 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.637 [2024-12-09 13:13:37.805695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.637 [2024-12-09 13:13:37.805823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.637 [2024-12-09 13:13:37.805840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.637 #34 NEW cov: 12395 ft: 14928 corp: 20/582b lim: 35 exec/s: 34 rss: 73Mb L: 35/35 MS: 1 ChangeByte- 00:06:35.637 [2024-12-09 13:13:37.875599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.637 [2024-12-09 13:13:37.875627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.637 [2024-12-09 13:13:37.875753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.637 [2024-12-09 13:13:37.875773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.637 [2024-12-09 13:13:37.875903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.638 [2024-12-09 13:13:37.875921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.638 [2024-12-09 13:13:37.876048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.638 [2024-12-09 13:13:37.876065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.897 #35 NEW cov: 12395 ft: 14955 corp: 21/617b lim: 35 exec/s: 35 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:06:35.897 [2024-12-09 13:13:37.925788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:02d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:37.925818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.897 [2024-12-09 13:13:37.925946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:37.925966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.897 [2024-12-09 13:13:37.926095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:37.926112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.897 [2024-12-09 13:13:37.926246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:37.926261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.897 #36 NEW cov: 12395 ft: 14967 corp: 22/652b lim: 35 exec/s: 36 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:06:35.897 [2024-12-09 
13:13:37.994906] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:35.897 [2024-12-09 13:13:37.995067] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:35.897 [2024-12-09 13:13:37.995416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:03d00052 cdw11:0000d000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:37.995444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.897 [2024-12-09 13:13:37.995561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:80000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:37.995584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.897 [2024-12-09 13:13:37.995723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:37.995742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.897 #37 NEW cov: 12395 ft: 14971 corp: 23/678b lim: 35 exec/s: 37 rss: 73Mb L: 26/35 MS: 1 ChangeBit- 00:06:35.897 [2024-12-09 13:13:38.046141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:38.046173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.897 [2024-12-09 13:13:38.046292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d07e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:38.046309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.897 [2024-12-09 13:13:38.046435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:c9d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:38.046452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.897 [2024-12-09 13:13:38.046582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:38.046602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.897 #38 NEW cov: 12395 ft: 15049 corp: 24/713b lim: 35 exec/s: 38 rss: 74Mb L: 35/35 MS: 1 ChangeByte- 00:06:35.897 [2024-12-09 13:13:38.106198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:38.106227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.897 [2024-12-09 13:13:38.106353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 
cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:38.106370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.897 [2024-12-09 13:13:38.106491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:c9d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:38.106510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.897 [2024-12-09 13:13:38.106621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.897 [2024-12-09 13:13:38.106638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.897 #39 NEW cov: 12395 ft: 15082 corp: 25/748b lim: 35 exec/s: 39 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:06:36.156 [2024-12-09 13:13:38.155950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.156 [2024-12-09 13:13:38.155978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.156 [2024-12-09 13:13:38.156114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.156 [2024-12-09 13:13:38.156131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.156 #40 NEW cov: 12395 ft: 15190 corp: 26/769b lim: 35 exec/s: 40 rss: 74Mb L: 21/35 MS: 1 EraseBytes- 00:06:36.156 [2024-12-09 13:13:38.226646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:02d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.156 [2024-12-09 13:13:38.226674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.156 [2024-12-09 13:13:38.226799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.156 [2024-12-09 13:13:38.226817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.156 [2024-12-09 13:13:38.226944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.156 [2024-12-09 13:13:38.226961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.156 [2024-12-09 13:13:38.227080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.156 [2024-12-09 13:13:38.227097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:36.156 #41 NEW cov: 12395 ft: 15195 corp: 27/804b lim: 35 exec/s: 41 rss: 74Mb L: 35/35 MS: 1 ChangeByte- 00:06:36.156 [2024-12-09 13:13:38.296757] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.156 [2024-12-09 13:13:38.296784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.156 [2024-12-09 13:13:38.296903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2f2f002b cdw11:d0002fd0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.156 [2024-12-09 13:13:38.296919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.156 [2024-12-09 13:13:38.297047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.156 [2024-12-09 13:13:38.297063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.156 [2024-12-09 13:13:38.297186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:d0d000d0 cdw11:d000d0d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.156 [2024-12-09 13:13:38.297204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:36.156 #42 NEW cov: 12395 ft: 15252 corp: 28/839b lim: 35 exec/s: 42 rss: 74Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:36.156 [2024-12-09 13:13:38.345965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:03d0000a cdw11:00000900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.156 [2024-12-09 13:13:38.345991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.156 #43 NEW cov: 12395 ft: 15273 corp: 29/846b lim: 35 exec/s: 43 rss: 74Mb L: 7/35 MS: 1 ChangeBit- 00:06:36.156 [2024-12-09 13:13:38.396092] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:36.156 [2024-12-09 13:13:38.396262] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:36.156 [2024-12-09 13:13:38.396663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:80000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.156 [2024-12-09 13:13:38.396698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.156 [2024-12-09 13:13:38.396821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.156 [2024-12-09 13:13:38.396844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.416 NEW_FUNC[1/3]: 0x1310078 in spdk_nvmf_ctrlr_identify_iocs_specific /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3180 00:06:36.416 NEW_FUNC[2/3]: 0x13109b8 in nvmf_ctrlr_identify_iocs_nvm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3136 00:06:36.416 #44 NEW cov: 12434 ft: 15345 corp: 30/872b lim: 35 exec/s: 22 rss: 74Mb L: 26/35 MS: 1 ChangeByte- 00:06:36.416 #44 DONE cov: 12434 ft: 15345 corp: 30/872b lim: 35 exec/s: 22 rss: 74Mb 00:06:36.416 ###### 
Recommended dictionary. ###### 00:06:36.416 "\011\000\000\000" # Uses: 3 00:06:36.416 "\377\377\011'<\306\201\270" # Uses: 0 00:06:36.416 ###### End of recommended dictionary. ###### 00:06:36.416 Done 44 runs in 2 second(s) 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:36.416 13:13:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:36.416 [2024-12-09 13:13:38.598156] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:06:36.416 [2024-12-09 13:13:38.598222] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid295796 ] 00:06:36.675 [2024-12-09 13:13:38.802722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.675 [2024-12-09 13:13:38.836248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.675 [2024-12-09 13:13:38.895810] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.675 [2024-12-09 13:13:38.912108] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:36.935 INFO: Running with entropic power schedule (0xFF, 100). 00:06:36.935 INFO: Seed: 2796694300 00:06:36.935 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:06:36.935 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:06:36.935 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:36.935 INFO: A corpus is not provided, starting from an empty corpus 00:06:36.935 #2 INITED exec/s: 0 rss: 66Mb 00:06:36.935 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:36.936 This may also happen if the target rejected all inputs we tried so far 00:06:37.195 NEW_FUNC[1/705]: 0x440c78 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:37.195 NEW_FUNC[2/705]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:37.195 #12 NEW cov: 12079 ft: 12080 corp: 2/17b lim: 20 exec/s: 0 rss: 72Mb L: 16/16 MS: 5 ShuffleBytes-ChangeBit-CrossOver-EraseBytes-InsertRepeatedBytes- 00:06:37.195 #23 NEW cov: 12192 ft: 12777 corp: 3/33b lim: 20 exec/s: 0 rss: 72Mb L: 16/16 MS: 1 ShuffleBytes- 00:06:37.195 #24 NEW cov: 12198 ft: 13442 corp: 4/39b lim: 20 exec/s: 0 rss: 72Mb L: 6/16 MS: 1 InsertRepeatedBytes- 00:06:37.454 #27 NEW cov: 12283 ft: 13726 corp: 5/56b lim: 20 exec/s: 0 rss: 72Mb L: 17/17 MS: 3 EraseBytes-EraseBytes-InsertRepeatedBytes- 00:06:37.454 #28 NEW cov: 12288 ft: 14057 corp: 6/64b lim: 20 exec/s: 0 rss: 72Mb L: 8/17 MS: 1 EraseBytes- 00:06:37.454 #29 NEW cov: 12288 ft: 14089 corp: 7/80b lim: 20 exec/s: 0 rss: 72Mb L: 16/17 MS: 1 InsertRepeatedBytes- 00:06:37.454 #30 NEW cov: 12288 ft: 14165 corp: 8/96b lim: 20 exec/s: 0 rss: 72Mb L: 16/17 MS: 1 CrossOver- 00:06:37.713 #36 NEW cov: 12288 ft: 14175 corp: 9/112b lim: 20 exec/s: 0 rss: 72Mb L: 16/17 MS: 1 ChangeBit- 00:06:37.713 #37 NEW cov: 12288 ft: 14224 corp: 10/120b lim: 20 exec/s: 0 rss: 72Mb L: 8/17 MS: 1 ChangeBinInt- 00:06:37.713 #38 NEW cov: 12288 ft: 14282 corp: 11/136b lim: 20 exec/s: 0 rss: 72Mb L: 16/17 MS: 1 ChangeBit- 00:06:37.713 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:37.713 #39 NEW cov: 12311 ft: 14343 corp: 12/153b lim: 20 exec/s: 0 rss: 72Mb L: 17/17 MS: 1 ChangeBit- 00:06:37.713 #40 NEW cov: 12311 ft: 14397 corp: 13/169b lim: 20 exec/s: 0 rss: 73Mb L: 16/17 MS: 1 ChangeBit- 00:06:37.973 #41 NEW cov: 12315 ft: 14536 corp: 14/181b lim: 20 exec/s: 41 rss: 73Mb L: 12/17 MS: 1 CrossOver- 00:06:37.973 #42 NEW cov: 12315 ft: 14591 corp: 15/188b lim: 20 exec/s: 42 rss: 73Mb L: 7/17 MS: 1 CrossOver- 
00:06:37.973 #43 NEW cov: 12315 ft: 14597 corp: 16/204b lim: 20 exec/s: 43 rss: 73Mb L: 16/17 MS: 1 CopyPart- 00:06:37.973 [2024-12-09 13:13:40.145835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:37.973 [2024-12-09 13:13:40.145878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.973 NEW_FUNC[1/20]: 0x137bee8 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3485 00:06:37.973 NEW_FUNC[2/20]: 0x137ca68 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3427 00:06:37.973 #46 NEW cov: 12641 ft: 14959 corp: 17/219b lim: 20 exec/s: 46 rss: 73Mb L: 15/17 MS: 3 ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:06:38.232 #47 NEW cov: 12641 ft: 15026 corp: 18/226b lim: 20 exec/s: 47 rss: 73Mb L: 7/17 MS: 1 ChangeBit- 00:06:38.232 #48 NEW cov: 12641 ft: 15047 corp: 19/244b lim: 20 exec/s: 48 rss: 73Mb L: 18/18 MS: 1 CopyPart- 00:06:38.232 #49 NEW cov: 12641 ft: 15055 corp: 20/251b lim: 20 exec/s: 49 rss: 73Mb L: 7/18 MS: 1 ShuffleBytes- 00:06:38.232 #50 NEW cov: 12641 ft: 15115 corp: 21/271b lim: 20 exec/s: 50 rss: 73Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:06:38.232 #51 NEW cov: 12641 ft: 15143 corp: 22/287b lim: 20 exec/s: 51 rss: 73Mb L: 16/20 MS: 1 InsertRepeatedBytes- 00:06:38.491 #52 NEW cov: 12641 ft: 15164 corp: 23/294b lim: 20 exec/s: 52 rss: 73Mb L: 7/20 MS: 1 CMP- DE: "\000\000"- 00:06:38.491 #53 NEW cov: 12641 ft: 15199 corp: 24/314b lim: 20 exec/s: 53 rss: 73Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:06:38.491 #54 NEW cov: 12641 ft: 15205 corp: 25/332b lim: 20 exec/s: 54 rss: 73Mb L: 18/20 MS: 1 InsertByte- 00:06:38.491 #55 NEW cov: 12641 ft: 15217 corp: 26/338b lim: 20 exec/s: 55 rss: 73Mb L: 6/20 MS: 1 EraseBytes- 00:06:38.750 #56 NEW cov: 12641 ft: 15222 corp: 27/354b lim: 20 exec/s: 56 rss: 73Mb L: 16/20 MS: 1 ChangeBit- 00:06:38.751 #57 NEW cov: 12644 ft: 15321 corp: 28/370b lim: 20 exec/s: 57 rss: 73Mb L: 16/20 MS: 1 PersAutoDict- DE: "\000\000"- 00:06:38.751 #61 NEW cov: 12644 ft: 15323 corp: 29/374b lim: 20 exec/s: 61 rss: 73Mb L: 4/20 MS: 4 ShuffleBytes-ChangeBit-InsertByte-CopyPart- 00:06:38.751 #62 NEW cov: 12644 ft: 15324 corp: 30/391b lim: 20 exec/s: 62 rss: 73Mb L: 17/20 MS: 1 InsertByte- 00:06:38.751 #63 NEW cov: 12644 ft: 15333 corp: 31/407b lim: 20 exec/s: 31 rss: 73Mb L: 16/20 MS: 1 PersAutoDict- DE: "\000\000"- 00:06:38.751 #63 DONE cov: 12644 ft: 15333 corp: 31/407b lim: 20 exec/s: 31 rss: 73Mb 00:06:38.751 ###### Recommended dictionary. ###### 00:06:38.751 "\000\000" # Uses: 2 00:06:38.751 ###### End of recommended dictionary. 
###### 00:06:38.751 Done 63 runs in 2 second(s) 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:39.010 13:13:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:39.010 [2024-12-09 13:13:41.134652] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:39.010 [2024-12-09 13:13:41.134720] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid296259 ] 00:06:39.269 [2024-12-09 13:13:41.346579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.269 [2024-12-09 13:13:41.379803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.269 [2024-12-09 13:13:41.438569] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.269 [2024-12-09 13:13:41.454903] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:39.269 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:39.269 INFO: Seed: 1045750558 00:06:39.269 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:06:39.269 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:06:39.270 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:39.270 INFO: A corpus is not provided, starting from an empty corpus 00:06:39.270 #2 INITED exec/s: 0 rss: 66Mb 00:06:39.270 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:39.270 This may also happen if the target rejected all inputs we tried so far 00:06:39.529 [2024-12-09 13:13:41.521962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.529 [2024-12-09 13:13:41.522002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.529 [2024-12-09 13:13:41.522142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.529 [2024-12-09 13:13:41.522158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.529 [2024-12-09 13:13:41.522286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.529 [2024-12-09 13:13:41.522302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.529 [2024-12-09 13:13:41.522420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.529 [2024-12-09 13:13:41.522436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.789 NEW_FUNC[1/717]: 0x441d78 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:39.789 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:39.789 #7 NEW cov: 12180 ft: 12180 corp: 2/30b lim: 35 exec/s: 0 rss: 72Mb L: 29/29 MS: 5 CopyPart-ChangeBit-ChangeBit-CrossOver-InsertRepeatedBytes- 00:06:39.789 [2024-12-09 13:13:41.852323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff06ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.789 [2024-12-09 13:13:41.852364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.789 [2024-12-09 13:13:41.852492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.789 [2024-12-09 13:13:41.852510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.789 #19 NEW cov: 12293 ft: 13160 corp: 3/44b lim: 35 exec/s: 0 rss: 72Mb L: 14/29 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:06:39.789 [2024-12-09 13:13:41.902930] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.789 [2024-12-09 13:13:41.902959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.789 [2024-12-09 13:13:41.903080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.789 [2024-12-09 13:13:41.903096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.789 [2024-12-09 13:13:41.903222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00001f00 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.789 [2024-12-09 13:13:41.903241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.789 [2024-12-09 13:13:41.903365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.789 [2024-12-09 13:13:41.903381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.789 #20 NEW cov: 12299 ft: 13377 corp: 4/77b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CMP- DE: "\037\000\000\000"- 00:06:39.789 [2024-12-09 13:13:41.973106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.789 [2024-12-09 13:13:41.973136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.789 [2024-12-09 13:13:41.973264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.789 [2024-12-09 13:13:41.973279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.789 [2024-12-09 13:13:41.973410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00001f00 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.789 [2024-12-09 13:13:41.973426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.789 [2024-12-09 13:13:41.973550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ff0affff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.789 [2024-12-09 13:13:41.973566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.789 #21 NEW cov: 12384 ft: 13607 corp: 5/110b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CrossOver- 00:06:40.049 [2024-12-09 13:13:42.043623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.043650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.049 [2024-12-09 13:13:42.043783] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.043800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.049 [2024-12-09 13:13:42.043914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:1f00ffff cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.043932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.049 [2024-12-09 13:13:42.044058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff0a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.044075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.049 [2024-12-09 13:13:42.044199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ff5a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.044216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.049 #22 NEW cov: 12384 ft: 13930 corp: 6/145b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 CrossOver- 00:06:40.049 [2024-12-09 13:13:42.113576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.113608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.049 [2024-12-09 13:13:42.113744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fff40003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.113761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.049 [2024-12-09 13:13:42.113887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.113906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.049 [2024-12-09 13:13:42.114032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.114047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.049 #23 NEW cov: 12384 ft: 14016 corp: 7/175b lim: 35 exec/s: 0 rss: 73Mb L: 30/35 MS: 1 InsertByte- 00:06:40.049 [2024-12-09 13:13:42.163693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.163719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.049 [2024-12-09 13:13:42.163840] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.163855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.049 [2024-12-09 13:13:42.163979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00001f00 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.163995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.049 [2024-12-09 13:13:42.164123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ff0affff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.164138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.049 #24 NEW cov: 12384 ft: 14057 corp: 8/208b lim: 35 exec/s: 0 rss: 73Mb L: 33/35 MS: 1 ChangeBit- 00:06:40.049 [2024-12-09 13:13:42.213578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.213608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.049 [2024-12-09 13:13:42.213741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00001f00 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.213758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.049 [2024-12-09 13:13:42.213884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.049 [2024-12-09 13:13:42.213901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.050 #30 NEW cov: 12384 ft: 14300 corp: 9/234b lim: 35 exec/s: 0 rss: 73Mb L: 26/35 MS: 1 EraseBytes- 00:06:40.050 [2024-12-09 13:13:42.263699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.050 [2024-12-09 13:13:42.263726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.050 [2024-12-09 13:13:42.263853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00001f00 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.050 [2024-12-09 13:13:42.263871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.050 [2024-12-09 13:13:42.263997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.050 [2024-12-09 13:13:42.264018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.309 #31 NEW cov: 12384 ft: 14320 corp: 
10/260b lim: 35 exec/s: 0 rss: 73Mb L: 26/35 MS: 1 ChangeBit- 00:06:40.309 [2024-12-09 13:13:42.334241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.309 [2024-12-09 13:13:42.334271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.309 [2024-12-09 13:13:42.334393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fff40003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.309 [2024-12-09 13:13:42.334410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.309 [2024-12-09 13:13:42.334530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.309 [2024-12-09 13:13:42.334547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.309 [2024-12-09 13:13:42.334677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.309 [2024-12-09 13:13:42.334693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.309 #32 NEW cov: 12384 ft: 14351 corp: 11/290b lim: 35 exec/s: 0 rss: 73Mb L: 30/35 MS: 1 ShuffleBytes- 00:06:40.309 [2024-12-09 13:13:42.404694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.309 [2024-12-09 13:13:42.404721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.309 [2024-12-09 13:13:42.404858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.309 [2024-12-09 13:13:42.404873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.309 [2024-12-09 13:13:42.405004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:1f00ffff cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.309 [2024-12-09 13:13:42.405021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.309 [2024-12-09 13:13:42.405153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff0a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.309 [2024-12-09 13:13:42.405170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.309 [2024-12-09 13:13:42.405296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ff5a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.309 [2024-12-09 13:13:42.405313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.309 NEW_FUNC[1/1]: 0x1c562e8 in 
get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:40.309 #33 NEW cov: 12407 ft: 14379 corp: 12/325b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ShuffleBytes- 00:06:40.309 [2024-12-09 13:13:42.474363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.309 [2024-12-09 13:13:42.474390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.309 [2024-12-09 13:13:42.474529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00001f00 cdw11:7fff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.309 [2024-12-09 13:13:42.474549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.310 [2024-12-09 13:13:42.474682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.310 [2024-12-09 13:13:42.474700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.310 #34 NEW cov: 12407 ft: 14433 corp: 13/351b lim: 35 exec/s: 0 rss: 73Mb L: 26/35 MS: 1 ChangeBit- 00:06:40.310 [2024-12-09 13:13:42.524788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.310 [2024-12-09 13:13:42.524814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.310 [2024-12-09 13:13:42.524940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff28ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.310 [2024-12-09 13:13:42.524955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.310 [2024-12-09 13:13:42.525081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0000ff1f cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.310 [2024-12-09 13:13:42.525097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.310 [2024-12-09 13:13:42.525215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:0aff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.310 [2024-12-09 13:13:42.525232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.310 #35 NEW cov: 12407 ft: 14516 corp: 14/385b lim: 35 exec/s: 35 rss: 73Mb L: 34/35 MS: 1 InsertByte- 00:06:40.569 [2024-12-09 13:13:42.574958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.569 [2024-12-09 13:13:42.574986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.569 [2024-12-09 13:13:42.575127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.569 [2024-12-09 13:13:42.575144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.569 [2024-12-09 13:13:42.575270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.569 [2024-12-09 13:13:42.575287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.569 [2024-12-09 13:13:42.575405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.569 [2024-12-09 13:13:42.575420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.569 #36 NEW cov: 12407 ft: 14547 corp: 15/414b lim: 35 exec/s: 36 rss: 73Mb L: 29/35 MS: 1 CopyPart- 00:06:40.569 [2024-12-09 13:13:42.625159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.569 [2024-12-09 13:13:42.625185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.569 [2024-12-09 13:13:42.625311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.569 [2024-12-09 13:13:42.625329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.569 [2024-12-09 13:13:42.625458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.569 [2024-12-09 13:13:42.625474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.569 [2024-12-09 13:13:42.625599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.569 [2024-12-09 13:13:42.625613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.569 #37 NEW cov: 12407 ft: 14553 corp: 16/443b lim: 35 exec/s: 37 rss: 73Mb L: 29/35 MS: 1 EraseBytes- 00:06:40.569 [2024-12-09 13:13:42.695304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.569 [2024-12-09 13:13:42.695331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.569 [2024-12-09 13:13:42.695471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00001f00 cdw11:7fff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.569 [2024-12-09 13:13:42.695490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.569 [2024-12-09 13:13:42.695625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.569 [2024-12-09 13:13:42.695642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.569 [2024-12-09 13:13:42.695764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.569 [2024-12-09 13:13:42.695781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.569 #38 NEW cov: 12407 ft: 14578 corp: 17/473b lim: 35 exec/s: 38 rss: 73Mb L: 30/35 MS: 1 CopyPart- 00:06:40.569 [2024-12-09 13:13:42.764673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff06ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.569 [2024-12-09 13:13:42.764701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.569 #39 NEW cov: 12407 ft: 15282 corp: 18/483b lim: 35 exec/s: 39 rss: 73Mb L: 10/35 MS: 1 EraseBytes- 00:06:40.827 [2024-12-09 13:13:42.834797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff06ff cdw11:1f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.827 [2024-12-09 13:13:42.834824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.827 #45 NEW cov: 12407 ft: 15286 corp: 19/493b lim: 35 exec/s: 45 rss: 73Mb L: 10/35 MS: 1 PersAutoDict- DE: "\037\000\000\000"- 00:06:40.827 [2024-12-09 13:13:42.905050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:07ff0601 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.827 [2024-12-09 13:13:42.905079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.827 #46 NEW cov: 12407 ft: 15322 corp: 20/503b lim: 35 exec/s: 46 rss: 73Mb L: 10/35 MS: 1 ChangeBinInt- 00:06:40.827 [2024-12-09 13:13:42.955117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff06ff cdw11:ff170003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.827 [2024-12-09 13:13:42.955149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.827 #47 NEW cov: 12407 ft: 15330 corp: 21/514b lim: 35 exec/s: 47 rss: 73Mb L: 11/35 MS: 1 InsertByte- 00:06:40.827 [2024-12-09 13:13:43.006112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.827 [2024-12-09 13:13:43.006137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.827 [2024-12-09 13:13:43.006269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff28ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.827 [2024-12-09 13:13:43.006287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.827 [2024-12-09 13:13:43.006411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0000ff1f cdw11:00ff0003 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:40.827 [2024-12-09 13:13:43.006429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.827 [2024-12-09 13:13:43.006549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.827 [2024-12-09 13:13:43.006565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.827 #48 NEW cov: 12407 ft: 15359 corp: 22/548b lim: 35 exec/s: 48 rss: 74Mb L: 34/35 MS: 1 ShuffleBytes- 00:06:41.086 [2024-12-09 13:13:43.076392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.086 [2024-12-09 13:13:43.076420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.086 [2024-12-09 13:13:43.076544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00002000 cdw11:7fff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.086 [2024-12-09 13:13:43.076562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.086 [2024-12-09 13:13:43.076696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.086 [2024-12-09 13:13:43.076713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.086 [2024-12-09 13:13:43.076837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.086 [2024-12-09 13:13:43.076853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.086 #49 NEW cov: 12407 ft: 15375 corp: 23/578b lim: 35 exec/s: 49 rss: 74Mb L: 30/35 MS: 1 ChangeBinInt- 00:06:41.086 [2024-12-09 13:13:43.146616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.086 [2024-12-09 13:13:43.146645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.086 [2024-12-09 13:13:43.146781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff2228ff cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.086 [2024-12-09 13:13:43.146798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.086 [2024-12-09 13:13:43.146923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0000ff1f cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.086 [2024-12-09 13:13:43.146942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.086 [2024-12-09 13:13:43.147061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.086 [2024-12-09 13:13:43.147077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.086 #50 NEW cov: 12407 ft: 15386 corp: 24/612b lim: 35 exec/s: 50 rss: 74Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:41.086 [2024-12-09 13:13:43.217061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:1f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.086 [2024-12-09 13:13:43.217087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.086 [2024-12-09 13:13:43.217225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.086 [2024-12-09 13:13:43.217240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.086 [2024-12-09 13:13:43.217369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.086 [2024-12-09 13:13:43.217385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.086 [2024-12-09 13:13:43.217513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffff5a0a cdw11:ff0a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.086 [2024-12-09 13:13:43.217530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.086 [2024-12-09 13:13:43.217665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ff5a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.086 [2024-12-09 13:13:43.217681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.087 #51 NEW cov: 12407 ft: 15392 corp: 25/647b lim: 35 exec/s: 51 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:06:41.087 [2024-12-09 13:13:43.266067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff06ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.087 [2024-12-09 13:13:43.266094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.087 #52 NEW cov: 12407 ft: 15399 corp: 26/658b lim: 35 exec/s: 52 rss: 74Mb L: 11/35 MS: 1 ShuffleBytes- 00:06:41.346 [2024-12-09 13:13:43.337199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.346 [2024-12-09 13:13:43.337227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.346 [2024-12-09 13:13:43.337353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fff40003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.346 [2024-12-09 13:13:43.337370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.346 [2024-12-09 13:13:43.337500] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fffb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.346 [2024-12-09 13:13:43.337519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.346 [2024-12-09 13:13:43.337649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.346 [2024-12-09 13:13:43.337670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.346 #53 NEW cov: 12407 ft: 15419 corp: 27/688b lim: 35 exec/s: 53 rss: 74Mb L: 30/35 MS: 1 ChangeBit- 00:06:41.346 [2024-12-09 13:13:43.406615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1f0006ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.346 [2024-12-09 13:13:43.406640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.346 #54 NEW cov: 12407 ft: 15434 corp: 28/698b lim: 35 exec/s: 54 rss: 74Mb L: 10/35 MS: 1 PersAutoDict- DE: "\037\000\000\000"- 00:06:41.346 [2024-12-09 13:13:43.477528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.346 [2024-12-09 13:13:43.477555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.346 [2024-12-09 13:13:43.477702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.346 [2024-12-09 13:13:43.477720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.346 [2024-12-09 13:13:43.477850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.346 [2024-12-09 13:13:43.477866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.346 [2024-12-09 13:13:43.477990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.346 [2024-12-09 13:13:43.478005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.346 #55 NEW cov: 12407 ft: 15451 corp: 29/728b lim: 35 exec/s: 27 rss: 74Mb L: 30/35 MS: 1 InsertByte- 00:06:41.346 #55 DONE cov: 12407 ft: 15451 corp: 29/728b lim: 35 exec/s: 27 rss: 74Mb 00:06:41.346 ###### Recommended dictionary. ###### 00:06:41.346 "\037\000\000\000" # Uses: 4 00:06:41.346 ###### End of recommended dictionary. 
###### 00:06:41.346 Done 55 runs in 2 second(s) 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:41.606 13:13:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:41.606 [2024-12-09 13:13:43.649103] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:41.606 [2024-12-09 13:13:43.649179] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid296616 ] 00:06:41.874 [2024-12-09 13:13:43.851569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.874 [2024-12-09 13:13:43.890462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.874 [2024-12-09 13:13:43.949997] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.874 [2024-12-09 13:13:43.966261] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:41.874 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:41.874 INFO: Seed: 3557736838 00:06:41.874 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:06:41.874 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:06:41.874 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:41.874 INFO: A corpus is not provided, starting from an empty corpus 00:06:41.874 #2 INITED exec/s: 0 rss: 66Mb 00:06:41.874 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:41.874 This may also happen if the target rejected all inputs we tried so far 00:06:41.874 [2024-12-09 13:13:44.021751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f0f00af0 cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.874 [2024-12-09 13:13:44.021782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.874 [2024-12-09 13:13:44.021837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f0f0f0f0 cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.874 [2024-12-09 13:13:44.021852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.137 NEW_FUNC[1/717]: 0x443f18 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:42.137 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:42.137 #5 NEW cov: 12191 ft: 12190 corp: 2/22b lim: 45 exec/s: 0 rss: 72Mb L: 21/21 MS: 3 CopyPart-ChangeByte-InsertRepeatedBytes- 00:06:42.137 [2024-12-09 13:13:44.362792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.137 [2024-12-09 13:13:44.362844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.137 [2024-12-09 13:13:44.362919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.137 [2024-12-09 13:13:44.362942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.403 #8 NEW cov: 12304 ft: 12844 corp: 3/42b lim: 45 exec/s: 0 rss: 72Mb L: 20/21 MS: 3 CrossOver-CopyPart-InsertRepeatedBytes- 00:06:42.403 [2024-12-09 13:13:44.412921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-12-09 13:13:44.412952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.403 [2024-12-09 13:13:44.413007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f0f0ff0a cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-12-09 13:13:44.413021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.403 [2024-12-09 13:13:44.413074] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f0f0f0f0 cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-12-09 13:13:44.413089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.403 #9 NEW cov: 12310 ft: 13390 corp: 4/73b lim: 45 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 CrossOver- 00:06:42.403 [2024-12-09 13:13:44.472944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:2aff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-12-09 13:13:44.472970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.403 [2024-12-09 13:13:44.473027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-12-09 13:13:44.473040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.404 #10 NEW cov: 12395 ft: 13578 corp: 5/93b lim: 45 exec/s: 0 rss: 72Mb L: 20/31 MS: 1 ChangeByte- 00:06:42.404 [2024-12-09 13:13:44.533061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffb50aff cdw11:b5b50005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.404 [2024-12-09 13:13:44.533086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.404 [2024-12-09 13:13:44.533141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:b5b5b5b5 cdw11:b5b50005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.404 [2024-12-09 13:13:44.533155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.404 #12 NEW cov: 12395 ft: 13743 corp: 6/115b lim: 45 exec/s: 0 rss: 72Mb L: 22/31 MS: 2 CMP-InsertRepeatedBytes- DE: "\377\377\377\000"- 00:06:42.404 [2024-12-09 13:13:44.573337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.404 [2024-12-09 13:13:44.573362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.404 [2024-12-09 13:13:44.573418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.404 [2024-12-09 13:13:44.573431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.404 [2024-12-09 13:13:44.573487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.404 [2024-12-09 13:13:44.573502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.404 #13 NEW cov: 12395 ft: 13918 corp: 7/143b lim: 45 exec/s: 0 rss: 73Mb L: 28/31 MS: 1 CopyPart- 00:06:42.404 [2024-12-09 13:13:44.613494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:42.404 [2024-12-09 13:13:44.613519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.404 [2024-12-09 13:13:44.613580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.404 [2024-12-09 13:13:44.613599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.404 [2024-12-09 13:13:44.613653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.404 [2024-12-09 13:13:44.613677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.684 #14 NEW cov: 12395 ft: 14037 corp: 8/174b lim: 45 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 CopyPart- 00:06:42.684 [2024-12-09 13:13:44.673622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.673647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.684 [2024-12-09 13:13:44.673700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff31ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.673713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.684 [2024-12-09 13:13:44.673767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.673781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.684 #15 NEW cov: 12395 ft: 14103 corp: 9/206b lim: 45 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertByte- 00:06:42.684 [2024-12-09 13:13:44.733765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.733791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.684 [2024-12-09 13:13:44.733848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff31ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.733861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.684 [2024-12-09 13:13:44.733916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.733931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.684 #16 NEW cov: 12395 ft: 14134 corp: 10/238b lim: 45 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 PersAutoDict- DE: "\377\377\377\000"- 00:06:42.684 [2024-12-09 13:13:44.793954] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.793979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.684 [2024-12-09 13:13:44.794035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff31ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.794049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.684 [2024-12-09 13:13:44.794104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.794119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.684 #17 NEW cov: 12395 ft: 14194 corp: 11/270b lim: 45 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:06:42.684 [2024-12-09 13:13:44.854083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27ff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.854108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.684 [2024-12-09 13:13:44.854166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff31ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.854180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.684 [2024-12-09 13:13:44.854236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.854250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.684 #18 NEW cov: 12395 ft: 14240 corp: 12/302b lim: 45 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:06:42.684 [2024-12-09 13:13:44.894184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff0a0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.894209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.684 [2024-12-09 13:13:44.894267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.894281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.684 [2024-12-09 13:13:44.894337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.684 [2024-12-09 13:13:44.894351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.684 NEW_FUNC[1/1]: 0x1c562e8 in 
get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:42.684 #19 NEW cov: 12418 ft: 14300 corp: 13/330b lim: 45 exec/s: 0 rss: 73Mb L: 28/32 MS: 1 CopyPart- 00:06:43.024 [2024-12-09 13:13:44.934311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:44.934335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:44.934393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:44.934408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:44.934464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:44.934478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.024 #20 NEW cov: 12418 ft: 14310 corp: 14/358b lim: 45 exec/s: 0 rss: 73Mb L: 28/32 MS: 1 ChangeBit- 00:06:43.024 [2024-12-09 13:13:44.974593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:2aff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:44.974618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:44.974674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:44.974694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:44.974746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:44.974760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:44.974814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fffffdff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:44.974828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.024 #21 NEW cov: 12418 ft: 14655 corp: 15/399b lim: 45 exec/s: 21 rss: 73Mb L: 41/41 MS: 1 CrossOver- 00:06:43.024 [2024-12-09 13:13:45.034577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:45.034606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:45.034662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff31ffff cdw11:ffff0007 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:45.034675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:45.034730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:45.034744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.024 #22 NEW cov: 12418 ft: 14666 corp: 16/434b lim: 45 exec/s: 22 rss: 73Mb L: 35/41 MS: 1 InsertRepeatedBytes- 00:06:43.024 [2024-12-09 13:13:45.074689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff23ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:45.074714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:45.074772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0af0ffff cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:45.074785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:45.074841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f0f0f0f0 cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:45.074855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.024 #23 NEW cov: 12418 ft: 14681 corp: 17/466b lim: 45 exec/s: 23 rss: 73Mb L: 32/41 MS: 1 InsertByte- 00:06:43.024 [2024-12-09 13:13:45.135044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff23ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:45.135070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:45.135124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0af0ffff cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:45.135139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:45.135196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f0f0f0f0 cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:45.135213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:45.135267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fffff0f0 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:45.135281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.024 #24 NEW cov: 12418 ft: 14702 corp: 18/504b lim: 45 exec/s: 24 rss: 73Mb L: 38/41 MS: 1 InsertRepeatedBytes- 00:06:43.024 [2024-12-09 13:13:45.195190] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff39ffff cdw11:d1590004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:45.195215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:45.195273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0af00000 cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:45.195287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:45.195342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f0f0f0f0 cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:45.195356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.024 [2024-12-09 13:13:45.195409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fffff0f0 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.024 [2024-12-09 13:13:45.195422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.305 #25 NEW cov: 12418 ft: 14718 corp: 19/542b lim: 45 exec/s: 25 rss: 74Mb L: 38/41 MS: 1 CMP- DE: "9\321Y\223+\012\000\000"- 00:06:43.305 [2024-12-09 13:13:45.255217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.255243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.255301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff312aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.255314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.255370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.255384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.305 #26 NEW cov: 12418 ft: 14725 corp: 20/574b lim: 45 exec/s: 26 rss: 74Mb L: 32/41 MS: 1 ChangeByte- 00:06:43.305 [2024-12-09 13:13:45.295315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27ff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.295341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.295398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.295412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.305 
[2024-12-09 13:13:45.295468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.295485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.305 #27 NEW cov: 12418 ft: 14739 corp: 21/606b lim: 45 exec/s: 27 rss: 74Mb L: 32/41 MS: 1 CrossOver- 00:06:43.305 [2024-12-09 13:13:45.355470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.355497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.355553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f0f0ff0a cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.355568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.355623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f0f0f0f0 cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.355637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.305 #28 NEW cov: 12418 ft: 14752 corp: 22/637b lim: 45 exec/s: 28 rss: 74Mb L: 31/41 MS: 1 ChangeBit- 00:06:43.305 [2024-12-09 13:13:45.395598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.395624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.395683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.395698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.395756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.395769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.305 #29 NEW cov: 12418 ft: 14789 corp: 23/665b lim: 45 exec/s: 29 rss: 74Mb L: 28/41 MS: 1 PersAutoDict- DE: "\377\377\377\000"- 00:06:43.305 [2024-12-09 13:13:45.436042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.436067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.436125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f0f0ff0a cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.436140] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.436196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:3737f037 cdw11:37370001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.436210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.436264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:37373737 cdw11:37370007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.436278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.436335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:f0f0f0f0 cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.436352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:43.305 #30 NEW cov: 12418 ft: 14901 corp: 24/710b lim: 45 exec/s: 30 rss: 74Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:06:43.305 [2024-12-09 13:13:45.475843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.475869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.475928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.475942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.475997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff0a0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.476010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.305 #31 NEW cov: 12418 ft: 14915 corp: 25/738b lim: 45 exec/s: 31 rss: 74Mb L: 28/45 MS: 1 CopyPart- 00:06:43.305 [2024-12-09 13:13:45.515999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.516024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.516083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff31ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.305 [2024-12-09 13:13:45.516097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.305 [2024-12-09 13:13:45.516153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.306 [2024-12-09 
13:13:45.516167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.572 #32 NEW cov: 12418 ft: 14924 corp: 26/773b lim: 45 exec/s: 32 rss: 74Mb L: 35/45 MS: 1 ShuffleBytes- 00:06:43.572 [2024-12-09 13:13:45.576129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.572 [2024-12-09 13:13:45.576155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.572 [2024-12-09 13:13:45.576213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.572 [2024-12-09 13:13:45.576228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.572 [2024-12-09 13:13:45.576284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.572 [2024-12-09 13:13:45.576298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.572 #33 NEW cov: 12418 ft: 14969 corp: 27/805b lim: 45 exec/s: 33 rss: 74Mb L: 32/45 MS: 1 InsertRepeatedBytes- 00:06:43.572 [2024-12-09 13:13:45.636468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:2aff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.572 [2024-12-09 13:13:45.636493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.572 [2024-12-09 13:13:45.636553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.572 [2024-12-09 13:13:45.636567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.572 [2024-12-09 13:13:45.636627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff290000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.572 [2024-12-09 13:13:45.636641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.572 [2024-12-09 13:13:45.636698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fffffdff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.572 [2024-12-09 13:13:45.636711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.572 #34 NEW cov: 12418 ft: 14978 corp: 28/846b lim: 45 exec/s: 34 rss: 74Mb L: 41/45 MS: 1 ChangeBinInt- 00:06:43.573 [2024-12-09 13:13:45.696635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.573 [2024-12-09 13:13:45.696661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.573 [2024-12-09 13:13:45.696718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.573 [2024-12-09 13:13:45.696733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.573 [2024-12-09 13:13:45.696785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:2b0a5993 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.573 [2024-12-09 13:13:45.696800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.573 [2024-12-09 13:13:45.696854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.573 [2024-12-09 13:13:45.696867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.573 #35 NEW cov: 12418 ft: 14998 corp: 29/886b lim: 45 exec/s: 35 rss: 74Mb L: 40/45 MS: 1 PersAutoDict- DE: "9\321Y\223+\012\000\000"- 00:06:43.573 [2024-12-09 13:13:45.756784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.573 [2024-12-09 13:13:45.756810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.573 [2024-12-09 13:13:45.756869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff31ffff cdw11:ffff0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.573 [2024-12-09 13:13:45.756882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.573 [2024-12-09 13:13:45.756936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:bfbfbfbf cdw11:bfbf0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.573 [2024-12-09 13:13:45.756949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.573 [2024-12-09 13:13:45.757003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.573 [2024-12-09 13:13:45.757017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.573 #36 NEW cov: 12418 ft: 15027 corp: 30/930b lim: 45 exec/s: 36 rss: 74Mb L: 44/45 MS: 1 InsertRepeatedBytes- 00:06:43.573 [2024-12-09 13:13:45.796749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.573 [2024-12-09 13:13:45.796774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.573 [2024-12-09 13:13:45.796830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.573 [2024-12-09 13:13:45.796844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.573 [2024-12-09 13:13:45.796897] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.573 [2024-12-09 13:13:45.796912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.853 #37 NEW cov: 12418 ft: 15069 corp: 31/960b lim: 45 exec/s: 37 rss: 74Mb L: 30/45 MS: 1 EraseBytes- 00:06:43.853 [2024-12-09 13:13:45.837001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff39ffff cdw11:d1590004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.853 [2024-12-09 13:13:45.837026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.853 [2024-12-09 13:13:45.837083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0af00000 cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.853 [2024-12-09 13:13:45.837098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.853 [2024-12-09 13:13:45.837153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f0f0f0f0 cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.853 [2024-12-09 13:13:45.837167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.853 [2024-12-09 13:13:45.837224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:dffff0f0 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.853 [2024-12-09 13:13:45.837238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.853 #38 NEW cov: 12418 ft: 15078 corp: 32/999b lim: 45 exec/s: 38 rss: 74Mb L: 39/45 MS: 1 InsertByte- 00:06:43.853 [2024-12-09 13:13:45.897075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.853 [2024-12-09 13:13:45.897100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.853 [2024-12-09 13:13:45.897158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff30ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.853 [2024-12-09 13:13:45.897173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.853 [2024-12-09 13:13:45.897230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.853 [2024-12-09 13:13:45.897244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.853 #39 NEW cov: 12418 ft: 15083 corp: 33/1031b lim: 45 exec/s: 39 rss: 74Mb L: 32/45 MS: 1 ChangeASCIIInt- 00:06:43.853 [2024-12-09 13:13:45.937171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.853 [2024-12-09 13:13:45.937196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.853 [2024-12-09 13:13:45.937258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f0f0ff0a cdw11:f0f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.853 [2024-12-09 13:13:45.937272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.853 [2024-12-09 13:13:45.937328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f0f0f0f0 cdw11:f4f00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.853 [2024-12-09 13:13:45.937342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.853 #40 NEW cov: 12418 ft: 15096 corp: 34/1062b lim: 45 exec/s: 40 rss: 74Mb L: 31/45 MS: 1 ChangeBit- 00:06:43.853 [2024-12-09 13:13:45.997159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.853 [2024-12-09 13:13:45.997184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.853 [2024-12-09 13:13:45.997240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.853 [2024-12-09 13:13:45.997254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.853 #41 NEW cov: 12418 ft: 15105 corp: 35/1082b lim: 45 exec/s: 20 rss: 74Mb L: 20/45 MS: 1 ChangeBinInt- 00:06:43.853 #41 DONE cov: 12418 ft: 15105 corp: 35/1082b lim: 45 exec/s: 20 rss: 74Mb 00:06:43.853 ###### Recommended dictionary. ###### 00:06:43.853 "\377\377\377\000" # Uses: 2 00:06:43.853 "9\321Y\223+\012\000\000" # Uses: 1 00:06:43.853 ###### End of recommended dictionary. 
###### 00:06:43.853 Done 41 runs in 2 second(s) 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:44.139 13:13:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:44.139 [2024-12-09 13:13:46.165323] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:44.139 [2024-12-09 13:13:46.165388] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid297160 ] 00:06:44.139 [2024-12-09 13:13:46.361835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.424 [2024-12-09 13:13:46.396765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.424 [2024-12-09 13:13:46.455571] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.424 [2024-12-09 13:13:46.471882] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:44.424 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:44.424 INFO: Seed: 1767765462 00:06:44.424 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:06:44.424 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:06:44.424 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:44.424 INFO: A corpus is not provided, starting from an empty corpus 00:06:44.424 #2 INITED exec/s: 0 rss: 65Mb 00:06:44.424 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:44.424 This may also happen if the target rejected all inputs we tried so far 00:06:44.425 [2024-12-09 13:13:46.538814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.425 [2024-12-09 13:13:46.538853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.425 [2024-12-09 13:13:46.538968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.425 [2024-12-09 13:13:46.538985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.425 [2024-12-09 13:13:46.539110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.425 [2024-12-09 13:13:46.539127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.425 [2024-12-09 13:13:46.539244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:44.425 [2024-12-09 13:13:46.539261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.726 NEW_FUNC[1/715]: 0x446728 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:44.726 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:44.726 #4 NEW cov: 12108 ft: 12109 corp: 2/9b lim: 10 exec/s: 0 rss: 72Mb L: 8/8 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:44.726 [2024-12-09 13:13:46.879564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.726 [2024-12-09 13:13:46.879620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.726 [2024-12-09 13:13:46.879751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.726 [2024-12-09 13:13:46.879771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.726 [2024-12-09 13:13:46.879898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.726 [2024-12-09 13:13:46.879919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.726 [2024-12-09 13:13:46.880053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ 
(04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.726 [2024-12-09 13:13:46.880073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.726 #5 NEW cov: 12221 ft: 12738 corp: 3/17b lim: 10 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 CrossOver- 00:06:44.726 [2024-12-09 13:13:46.949671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.726 [2024-12-09 13:13:46.949701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.726 [2024-12-09 13:13:46.949827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.726 [2024-12-09 13:13:46.949844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.726 [2024-12-09 13:13:46.949964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.726 [2024-12-09 13:13:46.949980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.726 [2024-12-09 13:13:46.950098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.726 [2024-12-09 13:13:46.950116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.008 #6 NEW cov: 12227 ft: 12968 corp: 4/26b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CopyPart- 00:06:45.008 [2024-12-09 13:13:47.019383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.008 [2024-12-09 13:13:47.019411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.008 [2024-12-09 13:13:47.019527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.008 [2024-12-09 13:13:47.019544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.008 #7 NEW cov: 12312 ft: 13599 corp: 5/30b lim: 10 exec/s: 0 rss: 72Mb L: 4/9 MS: 1 CrossOver- 00:06:45.008 [2024-12-09 13:13:47.069432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.008 [2024-12-09 13:13:47.069458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.008 #8 NEW cov: 12312 ft: 13890 corp: 6/33b lim: 10 exec/s: 0 rss: 72Mb L: 3/9 MS: 1 EraseBytes- 00:06:45.008 [2024-12-09 13:13:47.140292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff7f cdw11:00000000 00:06:45.008 [2024-12-09 13:13:47.140319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.008 [2024-12-09 13:13:47.140456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.008 [2024-12-09 13:13:47.140474] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.008 [2024-12-09 13:13:47.140591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.008 [2024-12-09 13:13:47.140607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.008 [2024-12-09 13:13:47.140726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.008 [2024-12-09 13:13:47.140741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.008 #9 NEW cov: 12312 ft: 14052 corp: 7/41b lim: 10 exec/s: 0 rss: 72Mb L: 8/9 MS: 1 ChangeBit- 00:06:45.008 [2024-12-09 13:13:47.190377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff7f cdw11:00000000 00:06:45.008 [2024-12-09 13:13:47.190406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.008 [2024-12-09 13:13:47.190528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.008 [2024-12-09 13:13:47.190543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.008 [2024-12-09 13:13:47.190664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.008 [2024-12-09 13:13:47.190682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.008 [2024-12-09 13:13:47.190797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000fbff cdw11:00000000 00:06:45.008 [2024-12-09 13:13:47.190812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.008 #10 NEW cov: 12312 ft: 14127 corp: 8/49b lim: 10 exec/s: 0 rss: 72Mb L: 8/9 MS: 1 ChangeBit- 00:06:45.279 [2024-12-09 13:13:47.260573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.279 [2024-12-09 13:13:47.260604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.279 [2024-12-09 13:13:47.260743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.260759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.280 [2024-12-09 13:13:47.260877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007fff cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.260893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.280 [2024-12-09 13:13:47.261025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.261042] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.280 #11 NEW cov: 12312 ft: 14190 corp: 9/57b lim: 10 exec/s: 0 rss: 72Mb L: 8/9 MS: 1 ChangeByte- 00:06:45.280 [2024-12-09 13:13:47.310737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.310762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.280 [2024-12-09 13:13:47.310893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.310910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.280 [2024-12-09 13:13:47.311033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.311049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.280 [2024-12-09 13:13:47.311177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.311194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.280 #12 NEW cov: 12312 ft: 14219 corp: 10/66b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ShuffleBytes- 00:06:45.280 [2024-12-09 13:13:47.380942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.380967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.280 [2024-12-09 13:13:47.381077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.381092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.280 [2024-12-09 13:13:47.381204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.381221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.280 [2024-12-09 13:13:47.381342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff4d cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.381358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.280 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:45.280 #13 NEW cov: 12335 ft: 14249 corp: 11/74b lim: 10 exec/s: 0 rss: 72Mb L: 8/9 MS: 1 ChangeByte- 00:06:45.280 [2024-12-09 13:13:47.430483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000ac0 cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.430509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:06:45.280 #14 NEW cov: 12335 ft: 14270 corp: 12/76b lim: 10 exec/s: 0 rss: 72Mb L: 2/9 MS: 1 InsertByte- 00:06:45.280 [2024-12-09 13:13:47.481184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.481210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.280 [2024-12-09 13:13:47.481340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.481355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.280 [2024-12-09 13:13:47.481476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.481494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.280 [2024-12-09 13:13:47.481609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.280 [2024-12-09 13:13:47.481636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.280 #15 NEW cov: 12335 ft: 14302 corp: 13/85b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ShuffleBytes- 00:06:45.567 [2024-12-09 13:13:47.531428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.531457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.567 [2024-12-09 13:13:47.531582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.531602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.567 [2024-12-09 13:13:47.531725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.531742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.567 [2024-12-09 13:13:47.531870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.531885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.567 #16 NEW cov: 12335 ft: 14317 corp: 14/94b lim: 10 exec/s: 16 rss: 73Mb L: 9/9 MS: 1 CrossOver- 00:06:45.567 [2024-12-09 13:13:47.601619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002dff cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.601644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.567 [2024-12-09 13:13:47.601762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007fff cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.601777] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.567 [2024-12-09 13:13:47.601894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.601909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.567 [2024-12-09 13:13:47.602018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000fffb cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.602035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.567 #17 NEW cov: 12335 ft: 14349 corp: 15/103b lim: 10 exec/s: 17 rss: 73Mb L: 9/9 MS: 1 InsertByte- 00:06:45.567 [2024-12-09 13:13:47.671864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000700 cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.671889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.567 [2024-12-09 13:13:47.671998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.672014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.567 [2024-12-09 13:13:47.672129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.672145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.567 [2024-12-09 13:13:47.672261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff4d cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.672277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.567 #18 NEW cov: 12335 ft: 14413 corp: 16/111b lim: 10 exec/s: 18 rss: 73Mb L: 8/9 MS: 1 ChangeBinInt- 00:06:45.567 [2024-12-09 13:13:47.741453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.741477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.567 #19 NEW cov: 12335 ft: 14448 corp: 17/113b lim: 10 exec/s: 19 rss: 73Mb L: 2/9 MS: 1 CopyPart- 00:06:45.567 [2024-12-09 13:13:47.792154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.792179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.567 [2024-12-09 13:13:47.792289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.567 [2024-12-09 13:13:47.792308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.567 [2024-12-09 13:13:47.792427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) 
qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.568 [2024-12-09 13:13:47.792444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.568 [2024-12-09 13:13:47.792556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.568 [2024-12-09 13:13:47.792571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.861 #20 NEW cov: 12335 ft: 14457 corp: 18/122b lim: 10 exec/s: 20 rss: 73Mb L: 9/9 MS: 1 ShuffleBytes- 00:06:45.861 [2024-12-09 13:13:47.862392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.861 [2024-12-09 13:13:47.862419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.861 [2024-12-09 13:13:47.862551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.861 [2024-12-09 13:13:47.862565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.861 [2024-12-09 13:13:47.862680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.861 [2024-12-09 13:13:47.862696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.861 [2024-12-09 13:13:47.862821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00007eff cdw11:00000000 00:06:45.861 [2024-12-09 13:13:47.862837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.861 #21 NEW cov: 12335 ft: 14471 corp: 19/131b lim: 10 exec/s: 21 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:06:45.861 [2024-12-09 13:13:47.932618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff07 cdw11:00000000 00:06:45.861 [2024-12-09 13:13:47.932642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.861 [2024-12-09 13:13:47.932767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:00000000 00:06:45.861 [2024-12-09 13:13:47.932784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.861 [2024-12-09 13:13:47.932904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.861 [2024-12-09 13:13:47.932921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.861 [2024-12-09 13:13:47.933042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff4d cdw11:00000000 00:06:45.861 [2024-12-09 13:13:47.933059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.861 #22 NEW cov: 12335 ft: 14497 corp: 20/139b lim: 10 exec/s: 22 rss: 73Mb L: 8/9 MS: 1 ChangeBinInt- 
00:06:45.861 [2024-12-09 13:13:47.982336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.861 [2024-12-09 13:13:47.982361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.861 [2024-12-09 13:13:47.982489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000000fa cdw11:00000000 00:06:45.861 [2024-12-09 13:13:47.982506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.861 #23 NEW cov: 12335 ft: 14505 corp: 21/143b lim: 10 exec/s: 23 rss: 73Mb L: 4/9 MS: 1 ChangeBinInt- 00:06:45.861 [2024-12-09 13:13:48.032940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.861 [2024-12-09 13:13:48.032964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.861 [2024-12-09 13:13:48.033098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.861 [2024-12-09 13:13:48.033115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.861 [2024-12-09 13:13:48.033231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 00:06:45.861 [2024-12-09 13:13:48.033247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.861 [2024-12-09 13:13:48.033374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.861 [2024-12-09 13:13:48.033391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.861 #24 NEW cov: 12335 ft: 14535 corp: 22/152b lim: 10 exec/s: 24 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt- 00:06:46.142 [2024-12-09 13:13:48.102761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:06:46.142 [2024-12-09 13:13:48.102789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.142 [2024-12-09 13:13:48.102924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b201 cdw11:00000000 00:06:46.142 [2024-12-09 13:13:48.102940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.142 #26 NEW cov: 12335 ft: 14552 corp: 23/156b lim: 10 exec/s: 26 rss: 73Mb L: 4/9 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:06:46.142 [2024-12-09 13:13:48.143279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff07 cdw11:00000000 00:06:46.142 [2024-12-09 13:13:48.143304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.142 [2024-12-09 13:13:48.143441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:00000000 00:06:46.142 [2024-12-09 
13:13:48.143458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.142 [2024-12-09 13:13:48.143581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.142 [2024-12-09 13:13:48.143600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.142 [2024-12-09 13:13:48.143726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00005dff cdw11:00000000 00:06:46.142 [2024-12-09 13:13:48.143741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.142 #27 NEW cov: 12335 ft: 14581 corp: 24/165b lim: 10 exec/s: 27 rss: 73Mb L: 9/9 MS: 1 InsertByte- 00:06:46.142 [2024-12-09 13:13:48.213496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:46.142 [2024-12-09 13:13:48.213524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.142 [2024-12-09 13:13:48.213642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000700 cdw11:00000000 00:06:46.142 [2024-12-09 13:13:48.213659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.142 [2024-12-09 13:13:48.213774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.142 [2024-12-09 13:13:48.213792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.142 [2024-12-09 13:13:48.213914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff5d cdw11:00000000 00:06:46.142 [2024-12-09 13:13:48.213931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.142 #29 NEW cov: 12335 ft: 14608 corp: 25/173b lim: 10 exec/s: 29 rss: 73Mb L: 8/9 MS: 2 EraseBytes-CrossOver- 00:06:46.142 [2024-12-09 13:13:48.283462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:46.142 [2024-12-09 13:13:48.283486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.142 [2024-12-09 13:13:48.283628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000700 cdw11:00000000 00:06:46.142 [2024-12-09 13:13:48.283644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.143 [2024-12-09 13:13:48.283764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.143 [2024-12-09 13:13:48.283780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.143 #30 NEW cov: 12335 ft: 14744 corp: 26/180b lim: 10 exec/s: 30 rss: 73Mb L: 7/9 MS: 1 EraseBytes- 00:06:46.143 [2024-12-09 13:13:48.353904] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.143 [2024-12-09 13:13:48.353929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.143 [2024-12-09 13:13:48.354046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.143 [2024-12-09 13:13:48.354061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.143 [2024-12-09 13:13:48.354183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff24 cdw11:00000000 00:06:46.143 [2024-12-09 13:13:48.354199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.143 [2024-12-09 13:13:48.354318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.143 [2024-12-09 13:13:48.354335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.143 #31 NEW cov: 12335 ft: 14750 corp: 27/189b lim: 10 exec/s: 31 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:06:46.415 [2024-12-09 13:13:48.404024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff7f cdw11:00000000 00:06:46.415 [2024-12-09 13:13:48.404050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.415 [2024-12-09 13:13:48.404172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.415 [2024-12-09 13:13:48.404188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.415 [2024-12-09 13:13:48.404304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000026ff cdw11:00000000 00:06:46.415 [2024-12-09 13:13:48.404320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.415 [2024-12-09 13:13:48.404452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000fffb cdw11:00000000 00:06:46.415 [2024-12-09 13:13:48.404469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.415 #32 NEW cov: 12335 ft: 14782 corp: 28/198b lim: 10 exec/s: 32 rss: 73Mb L: 9/9 MS: 1 InsertByte- 00:06:46.415 [2024-12-09 13:13:48.453594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003fc0 cdw11:00000000 00:06:46.415 [2024-12-09 13:13:48.453622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.415 #33 NEW cov: 12335 ft: 14791 corp: 29/200b lim: 10 exec/s: 33 rss: 73Mb L: 2/9 MS: 1 ChangeByte- 00:06:46.415 [2024-12-09 13:13:48.504628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002dff cdw11:00000000 00:06:46.415 [2024-12-09 13:13:48.504653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.415 [2024-12-09 13:13:48.504776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007fff cdw11:00000000 00:06:46.415 [2024-12-09 13:13:48.504792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.415 [2024-12-09 13:13:48.504912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.415 [2024-12-09 13:13:48.504927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.415 [2024-12-09 13:13:48.505043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.415 [2024-12-09 13:13:48.505059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.415 [2024-12-09 13:13:48.505185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000fbff cdw11:00000000 00:06:46.415 [2024-12-09 13:13:48.505201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.415 #34 NEW cov: 12335 ft: 14842 corp: 30/210b lim: 10 exec/s: 17 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:06:46.415 #34 DONE cov: 12335 ft: 14842 corp: 30/210b lim: 10 exec/s: 17 rss: 74Mb 00:06:46.415 Done 34 runs in 2 second(s) 00:06:46.415 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:46.415 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:46.415 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:46.696 13:13:48 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:46.696 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:46.696 [2024-12-09 13:13:48.697071] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:46.696 [2024-12-09 13:13:48.697154] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid297560 ] 00:06:46.696 [2024-12-09 13:13:48.897986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.045 [2024-12-09 13:13:48.936555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.046 [2024-12-09 13:13:48.995837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.046 [2024-12-09 13:13:49.012133] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:47.046 INFO: Running with entropic power schedule (0xFF, 100). 00:06:47.046 INFO: Seed: 11786014 00:06:47.046 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:06:47.046 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:06:47.046 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:47.046 INFO: A corpus is not provided, starting from an empty corpus 00:06:47.046 #2 INITED exec/s: 0 rss: 65Mb 00:06:47.046 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
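The nvmf/run.sh xtrace above reduces to the following sketch of start_llvm_fuzz for one fuzzer index. It is a condensed reconstruction, not the script itself: $rootdir stands for the spdk checkout path shown in the trace, and the redirections into the generated config and the LSAN suppression file are assumptions (bash xtrace does not print redirections); every other command and flag appears verbatim in the trace.

    # Sketch of start_llvm_fuzz <fuzzer_type> <timen> <core>, reconstructed from the xtrace above.
    start_llvm_fuzz() {
        local fuzzer_type=$1 timen=$2 core=$3
        local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
        local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
        local suppress_file=/var/tmp/suppress_nvmf_fuzz
        local LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0

        # Each fuzzer index gets its own NVMe/TCP listener port: "44" plus the zero-padded index (4407, 4408, ...).
        local port=44$(printf %02d $fuzzer_type)
        mkdir -p $corpus_dir

        # Rewrite the template JSON config so the target listens on this run's port
        # (writing the result to $nvmf_cfg is an assumption; the sed expression is from the trace).
        local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
        sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
            $rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf > $nvmf_cfg

        # Suppress known long-lived allocations for LeakSanitizer (appending to the
        # suppression file is an assumption; the two entries are echoed in the trace).
        echo "leak:spdk_nvmf_qpair_disconnect" >> $suppress_file
        echo "leak:nvmf_ctrlr_create" >> $suppress_file

        # Launch the fuzzer app: it brings up an NVMe/TCP target on 127.0.0.1:$port
        # and fuzzes the admin command selected by -Z for the time budget given by -t.
        $rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m $core -s 512 \
            -P $rootdir/../output/llvm/ -F "$trid" -c $nvmf_cfg -t $timen \
            -D $corpus_dir -Z $fuzzer_type
    }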
00:06:47.046 This may also happen if the target rejected all inputs we tried so far 00:06:47.046 [2024-12-09 13:13:49.071109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:47.046 [2024-12-09 13:13:49.071139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.313 NEW_FUNC[1/715]: 0x447128 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:47.313 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:47.313 #3 NEW cov: 12108 ft: 12095 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CopyPart- 00:06:47.313 [2024-12-09 13:13:49.412258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007e8a cdw11:00000000 00:06:47.313 [2024-12-09 13:13:49.412315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.313 #5 NEW cov: 12221 ft: 12840 corp: 3/5b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 2 ChangeBit-InsertByte- 00:06:47.313 [2024-12-09 13:13:49.462298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007e7e cdw11:00000000 00:06:47.313 [2024-12-09 13:13:49.462325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.313 [2024-12-09 13:13:49.462379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008a8a cdw11:00000000 00:06:47.313 [2024-12-09 13:13:49.462393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.313 #6 NEW cov: 12227 ft: 13230 corp: 4/9b lim: 10 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 CopyPart- 00:06:47.313 [2024-12-09 13:13:49.522379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005e8a cdw11:00000000 00:06:47.313 [2024-12-09 13:13:49.522404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.313 #7 NEW cov: 12312 ft: 13644 corp: 5/11b lim: 10 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 ChangeBit- 00:06:47.595 [2024-12-09 13:13:49.562467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002e0a cdw11:00000000 00:06:47.595 [2024-12-09 13:13:49.562495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.595 #8 NEW cov: 12312 ft: 13814 corp: 6/13b lim: 10 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 ChangeByte- 00:06:47.595 [2024-12-09 13:13:49.622847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002eff cdw11:00000000 00:06:47.595 [2024-12-09 13:13:49.622872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.595 [2024-12-09 13:13:49.622926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:47.595 [2024-12-09 13:13:49.622939] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.595 [2024-12-09 13:13:49.622992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:47.595 [2024-12-09 13:13:49.623006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.595 #9 NEW cov: 12312 ft: 14033 corp: 7/20b lim: 10 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:06:47.596 [2024-12-09 13:13:49.683012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002eff cdw11:00000000 00:06:47.596 [2024-12-09 13:13:49.683038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.596 [2024-12-09 13:13:49.683094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 00:06:47.596 [2024-12-09 13:13:49.683108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.596 [2024-12-09 13:13:49.683161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:47.596 [2024-12-09 13:13:49.683175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.596 #10 NEW cov: 12312 ft: 14082 corp: 8/27b lim: 10 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 ChangeBinInt- 00:06:47.596 [2024-12-09 13:13:49.742911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003e0a cdw11:00000000 00:06:47.596 [2024-12-09 13:13:49.742937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.596 #11 NEW cov: 12312 ft: 14159 corp: 9/29b lim: 10 exec/s: 0 rss: 72Mb L: 2/7 MS: 1 ChangeBit- 00:06:47.596 [2024-12-09 13:13:49.783270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000026ff cdw11:00000000 00:06:47.596 [2024-12-09 13:13:49.783295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.596 [2024-12-09 13:13:49.783352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 00:06:47.596 [2024-12-09 13:13:49.783366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.596 [2024-12-09 13:13:49.783421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:47.596 [2024-12-09 13:13:49.783435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.596 #12 NEW cov: 12312 ft: 14220 corp: 10/36b lim: 10 exec/s: 0 rss: 73Mb L: 7/7 MS: 1 ChangeBit- 00:06:47.896 [2024-12-09 13:13:49.843248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002e02 cdw11:00000000 00:06:47.896 [2024-12-09 13:13:49.843279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.896 #13 NEW cov: 12312 ft: 14256 
corp: 11/38b lim: 10 exec/s: 0 rss: 73Mb L: 2/7 MS: 1 ChangeBinInt- 00:06:47.896 [2024-12-09 13:13:49.883369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002efb cdw11:00000000 00:06:47.896 [2024-12-09 13:13:49.883395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.896 #14 NEW cov: 12312 ft: 14348 corp: 12/40b lim: 10 exec/s: 0 rss: 73Mb L: 2/7 MS: 1 ChangeByte- 00:06:47.896 [2024-12-09 13:13:49.943519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008a8a cdw11:00000000 00:06:47.896 [2024-12-09 13:13:49.943545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.896 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:47.896 #16 NEW cov: 12335 ft: 14413 corp: 13/42b lim: 10 exec/s: 0 rss: 73Mb L: 2/7 MS: 2 EraseBytes-CopyPart- 00:06:47.896 [2024-12-09 13:13:50.003680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a8a cdw11:00000000 00:06:47.896 [2024-12-09 13:13:50.003709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.896 #17 NEW cov: 12335 ft: 14517 corp: 14/44b lim: 10 exec/s: 0 rss: 73Mb L: 2/7 MS: 1 CrossOver- 00:06:47.896 [2024-12-09 13:13:50.063903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e20a cdw11:00000000 00:06:47.896 [2024-12-09 13:13:50.063930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.896 #18 NEW cov: 12335 ft: 14543 corp: 15/46b lim: 10 exec/s: 18 rss: 73Mb L: 2/7 MS: 1 ChangeByte- 00:06:47.896 [2024-12-09 13:13:50.104136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002efb cdw11:00000000 00:06:47.896 [2024-12-09 13:13:50.104164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.896 [2024-12-09 13:13:50.104219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002efb cdw11:00000000 00:06:47.896 [2024-12-09 13:13:50.104234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.223 #19 NEW cov: 12335 ft: 14592 corp: 16/50b lim: 10 exec/s: 19 rss: 73Mb L: 4/7 MS: 1 CopyPart- 00:06:48.223 [2024-12-09 13:13:50.164200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000268a cdw11:00000000 00:06:48.223 [2024-12-09 13:13:50.164227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.223 #20 NEW cov: 12335 ft: 14669 corp: 17/52b lim: 10 exec/s: 20 rss: 73Mb L: 2/7 MS: 1 ChangeByte- 00:06:48.223 [2024-12-09 13:13:50.204249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:06:48.223 [2024-12-09 13:13:50.204275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.223 #21 NEW cov: 12335 ft: 
14691 corp: 18/54b lim: 10 exec/s: 21 rss: 73Mb L: 2/7 MS: 1 ChangeByte- 00:06:48.223 [2024-12-09 13:13:50.264847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:48.223 [2024-12-09 13:13:50.264875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.223 [2024-12-09 13:13:50.264946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:48.223 [2024-12-09 13:13:50.264964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.223 [2024-12-09 13:13:50.265021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:48.223 [2024-12-09 13:13:50.265035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.223 [2024-12-09 13:13:50.265091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:48.223 [2024-12-09 13:13:50.265105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.223 #22 NEW cov: 12335 ft: 14938 corp: 19/63b lim: 10 exec/s: 22 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:06:48.223 [2024-12-09 13:13:50.304564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000023e cdw11:00000000 00:06:48.223 [2024-12-09 13:13:50.304601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.223 #26 NEW cov: 12335 ft: 14974 corp: 20/66b lim: 10 exec/s: 26 rss: 73Mb L: 3/9 MS: 4 EraseBytes-CrossOver-ChangeBit-CrossOver- 00:06:48.223 [2024-12-09 13:13:50.364755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000268a cdw11:00000000 00:06:48.223 [2024-12-09 13:13:50.364780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.223 #27 NEW cov: 12335 ft: 14980 corp: 21/69b lim: 10 exec/s: 27 rss: 73Mb L: 3/9 MS: 1 InsertByte- 00:06:48.223 [2024-12-09 13:13:50.405208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002eff cdw11:00000000 00:06:48.223 [2024-12-09 13:13:50.405233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.223 [2024-12-09 13:13:50.405287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 00:06:48.223 [2024-12-09 13:13:50.405301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.223 [2024-12-09 13:13:50.405358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:48.223 [2024-12-09 13:13:50.405372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.223 [2024-12-09 13:13:50.405426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ 
(00) qid:0 cid:7 nsid:0 cdw10:00000a2e cdw11:00000000 00:06:48.223 [2024-12-09 13:13:50.405440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.223 #28 NEW cov: 12335 ft: 14982 corp: 22/78b lim: 10 exec/s: 28 rss: 73Mb L: 9/9 MS: 1 CopyPart- 00:06:48.502 [2024-12-09 13:13:50.444956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002e2a cdw11:00000000 00:06:48.502 [2024-12-09 13:13:50.444982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.502 #29 NEW cov: 12335 ft: 14990 corp: 23/80b lim: 10 exec/s: 29 rss: 73Mb L: 2/9 MS: 1 ChangeBit- 00:06:48.502 [2024-12-09 13:13:50.485217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007e8a cdw11:00000000 00:06:48.502 [2024-12-09 13:13:50.485243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.502 [2024-12-09 13:13:50.485298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e20a cdw11:00000000 00:06:48.502 [2024-12-09 13:13:50.485312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.502 #30 NEW cov: 12335 ft: 15024 corp: 24/84b lim: 10 exec/s: 30 rss: 73Mb L: 4/9 MS: 1 CrossOver- 00:06:48.502 [2024-12-09 13:13:50.525312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002efb cdw11:00000000 00:06:48.502 [2024-12-09 13:13:50.525338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.502 [2024-12-09 13:13:50.525396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002efb cdw11:00000000 00:06:48.502 [2024-12-09 13:13:50.525410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.502 #31 NEW cov: 12335 ft: 15032 corp: 25/88b lim: 10 exec/s: 31 rss: 73Mb L: 4/9 MS: 1 ShuffleBytes- 00:06:48.502 [2024-12-09 13:13:50.585633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004949 cdw11:00000000 00:06:48.502 [2024-12-09 13:13:50.585658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.502 [2024-12-09 13:13:50.585715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004949 cdw11:00000000 00:06:48.502 [2024-12-09 13:13:50.585729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.502 [2024-12-09 13:13:50.585784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002efb cdw11:00000000 00:06:48.502 [2024-12-09 13:13:50.585797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.502 #32 NEW cov: 12335 ft: 15050 corp: 26/94b lim: 10 exec/s: 32 rss: 73Mb L: 6/9 MS: 1 InsertRepeatedBytes- 00:06:48.502 [2024-12-09 13:13:50.625744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) 
qid:0 cid:4 nsid:0 cdw10:00002eff cdw11:00000000 00:06:48.502 [2024-12-09 13:13:50.625770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.502 [2024-12-09 13:13:50.625828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:48.502 [2024-12-09 13:13:50.625842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.502 [2024-12-09 13:13:50.625896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:48.502 [2024-12-09 13:13:50.625910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.502 #33 NEW cov: 12335 ft: 15062 corp: 27/101b lim: 10 exec/s: 33 rss: 73Mb L: 7/9 MS: 1 CopyPart- 00:06:48.502 [2024-12-09 13:13:50.665718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007e7e cdw11:00000000 00:06:48.502 [2024-12-09 13:13:50.665754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.502 [2024-12-09 13:13:50.665826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008a8a cdw11:00000000 00:06:48.502 [2024-12-09 13:13:50.665841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.502 #34 NEW cov: 12335 ft: 15080 corp: 28/106b lim: 10 exec/s: 34 rss: 74Mb L: 5/9 MS: 1 InsertByte- 00:06:48.502 [2024-12-09 13:13:50.725780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c2ee cdw11:00000000 00:06:48.502 [2024-12-09 13:13:50.725805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.764 #35 NEW cov: 12335 ft: 15090 corp: 29/108b lim: 10 exec/s: 35 rss: 74Mb L: 2/9 MS: 1 ChangeBinInt- 00:06:48.764 [2024-12-09 13:13:50.785930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000202 cdw11:00000000 00:06:48.764 [2024-12-09 13:13:50.785958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.764 #37 NEW cov: 12335 ft: 15121 corp: 30/110b lim: 10 exec/s: 37 rss: 74Mb L: 2/9 MS: 2 EraseBytes-CopyPart- 00:06:48.764 [2024-12-09 13:13:50.825981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003e0a cdw11:00000000 00:06:48.764 [2024-12-09 13:13:50.826005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.764 #38 NEW cov: 12335 ft: 15128 corp: 31/113b lim: 10 exec/s: 38 rss: 74Mb L: 3/9 MS: 1 InsertByte- 00:06:48.764 [2024-12-09 13:13:50.866087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 00:06:48.764 [2024-12-09 13:13:50.866112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.764 #39 NEW cov: 12335 ft: 15152 corp: 32/116b lim: 10 exec/s: 39 rss: 74Mb L: 3/9 MS: 1 ChangeBinInt- 
00:06:48.764 [2024-12-09 13:13:50.926326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2b cdw11:00000000 00:06:48.764 [2024-12-09 13:13:50.926351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.764 #40 NEW cov: 12335 ft: 15182 corp: 33/119b lim: 10 exec/s: 40 rss: 74Mb L: 3/9 MS: 1 ShuffleBytes- 00:06:48.764 [2024-12-09 13:13:50.986706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002efb cdw11:00000000 00:06:48.764 [2024-12-09 13:13:50.986731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.764 [2024-12-09 13:13:50.986805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00007171 cdw11:00000000 00:06:48.764 [2024-12-09 13:13:50.986819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.764 [2024-12-09 13:13:50.986874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000712e cdw11:00000000 00:06:48.764 [2024-12-09 13:13:50.986887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.025 #41 NEW cov: 12335 ft: 15211 corp: 34/126b lim: 10 exec/s: 41 rss: 74Mb L: 7/9 MS: 1 InsertRepeatedBytes- 00:06:49.025 [2024-12-09 13:13:51.026573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002e02 cdw11:00000000 00:06:49.025 [2024-12-09 13:13:51.026602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.025 #42 NEW cov: 12335 ft: 15257 corp: 35/128b lim: 10 exec/s: 42 rss: 74Mb L: 2/9 MS: 1 CrossOver- 00:06:49.025 [2024-12-09 13:13:51.066627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007e8a cdw11:00000000 00:06:49.025 [2024-12-09 13:13:51.066653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.025 #43 NEW cov: 12335 ft: 15327 corp: 36/131b lim: 10 exec/s: 21 rss: 74Mb L: 3/9 MS: 1 EraseBytes- 00:06:49.025 #43 DONE cov: 12335 ft: 15327 corp: 36/131b lim: 10 exec/s: 21 rss: 74Mb 00:06:49.025 Done 43 runs in 2 second(s) 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:49.025 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:49.025 [2024-12-09 13:13:51.258022] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:49.025 [2024-12-09 13:13:51.258100] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid298002 ] 00:06:49.286 [2024-12-09 13:13:51.460262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.286 [2024-12-09 13:13:51.493789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.546 [2024-12-09 13:13:51.552903] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.546 [2024-12-09 13:13:51.569211] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:49.546 INFO: Running with entropic power schedule (0xFF, 100). 
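The ../common.sh lines in the trace (the (( i++ )) and (( i < fuzz_num )) pair before each start_llvm_fuzz call) are the driver loop that advances from fuzzer 7 to fuzzer 8 here. A minimal sketch of that loop, assuming fuzz_num is set by the calling test script and start_llvm_fuzz behaves as reconstructed above:

    # Driver loop implied by the ../common.sh@72-73 trace lines; fuzz_num is an assumption
    # (the total number of fuzzer targets is not shown in this log).
    i=0
    while (( i < fuzz_num )); do
        # 1 is the per-target time budget and 0x1 the core mask, as in 'start_llvm_fuzz 7 1 0x1' above.
        start_llvm_fuzz $i 1 0x1
        (( i++ ))
    done

Per the printed command traces, index 7 exercises the DELETE IO SQ admin command and index 8 the NAMESPACE ATTACHMENT admin command.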
00:06:49.547 INFO: Seed: 2568809339 00:06:49.547 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:06:49.547 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:06:49.547 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:49.547 INFO: A corpus is not provided, starting from an empty corpus 00:06:49.547 [2024-12-09 13:13:51.639011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.547 [2024-12-09 13:13:51.639048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.547 #2 INITED cov: 12135 ft: 12129 corp: 1/1b exec/s: 0 rss: 70Mb 00:06:49.547 [2024-12-09 13:13:51.690272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.547 [2024-12-09 13:13:51.690303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.547 [2024-12-09 13:13:51.690438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.547 [2024-12-09 13:13:51.690455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.547 [2024-12-09 13:13:51.690592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.547 [2024-12-09 13:13:51.690612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.547 [2024-12-09 13:13:51.690743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.547 [2024-12-09 13:13:51.690761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.547 [2024-12-09 13:13:51.690894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.547 [2024-12-09 13:13:51.690913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.547 #3 NEW cov: 12248 ft: 13602 corp: 2/6b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:49.547 [2024-12-09 13:13:51.759380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.547 [2024-12-09 13:13:51.759409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.547 #4 NEW cov: 12254 ft: 13803 corp: 3/7b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:49.808 [2024-12-09 13:13:51.809926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:49.808 [2024-12-09 13:13:51.809956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.808 [2024-12-09 13:13:51.810088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.808 [2024-12-09 13:13:51.810106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.808 #5 NEW cov: 12339 ft: 14275 corp: 4/9b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CrossOver- 00:06:49.808 [2024-12-09 13:13:51.880778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.808 [2024-12-09 13:13:51.880805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.808 [2024-12-09 13:13:51.880939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.808 [2024-12-09 13:13:51.880955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.808 [2024-12-09 13:13:51.881086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.808 [2024-12-09 13:13:51.881104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.808 [2024-12-09 13:13:51.881243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.808 [2024-12-09 13:13:51.881261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.808 [2024-12-09 13:13:51.881394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.808 [2024-12-09 13:13:51.881412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.808 #6 NEW cov: 12339 ft: 14336 corp: 5/14b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:49.808 [2024-12-09 13:13:51.930161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.808 [2024-12-09 13:13:51.930191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.808 [2024-12-09 13:13:51.930331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.808 [2024-12-09 13:13:51.930349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.808 #7 NEW cov: 12339 ft: 14519 corp: 6/16b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:49.808 [2024-12-09 
13:13:52.000923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.808 [2024-12-09 13:13:52.000950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.808 [2024-12-09 13:13:52.001082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.808 [2024-12-09 13:13:52.001099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.808 [2024-12-09 13:13:52.001224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.808 [2024-12-09 13:13:52.001240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.808 [2024-12-09 13:13:52.001372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.808 [2024-12-09 13:13:52.001389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.808 #8 NEW cov: 12339 ft: 14555 corp: 7/20b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:49.808 [2024-12-09 13:13:52.050555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.808 [2024-12-09 13:13:52.050589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.808 [2024-12-09 13:13:52.050721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.808 [2024-12-09 13:13:52.050737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.070 #9 NEW cov: 12339 ft: 14577 corp: 8/22b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CopyPart- 00:06:50.070 [2024-12-09 13:13:52.101229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.070 [2024-12-09 13:13:52.101256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.070 [2024-12-09 13:13:52.101397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.070 [2024-12-09 13:13:52.101414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.070 [2024-12-09 13:13:52.101549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.070 [2024-12-09 13:13:52.101567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.070 [2024-12-09 13:13:52.101715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.070 [2024-12-09 13:13:52.101734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.070 #10 NEW cov: 12339 ft: 14606 corp: 9/26b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 CrossOver- 00:06:50.070 [2024-12-09 13:13:52.170895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.070 [2024-12-09 13:13:52.170923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.070 [2024-12-09 13:13:52.171052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.070 [2024-12-09 13:13:52.171069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.070 #11 NEW cov: 12339 ft: 14669 corp: 10/28b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ChangeBit- 00:06:50.070 [2024-12-09 13:13:52.241359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.070 [2024-12-09 13:13:52.241389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.070 [2024-12-09 13:13:52.241532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.070 [2024-12-09 13:13:52.241551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.070 [2024-12-09 13:13:52.241707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.070 [2024-12-09 13:13:52.241726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.070 #12 NEW cov: 12339 ft: 14845 corp: 11/31b lim: 5 exec/s: 0 rss: 71Mb L: 3/5 MS: 1 EraseBytes- 00:06:50.070 [2024-12-09 13:13:52.312258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.070 [2024-12-09 13:13:52.312286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.070 [2024-12-09 13:13:52.312436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.070 [2024-12-09 13:13:52.312454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.070 [2024-12-09 13:13:52.312597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.070 [2024-12-09 13:13:52.312613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.070 [2024-12-09 13:13:52.312752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.070 [2024-12-09 13:13:52.312767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.070 [2024-12-09 13:13:52.312906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.070 [2024-12-09 13:13:52.312927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.331 #13 NEW cov: 12339 ft: 14870 corp: 12/36b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CrossOver- 00:06:50.331 [2024-12-09 13:13:52.382246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.331 [2024-12-09 13:13:52.382273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.331 [2024-12-09 13:13:52.382404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.331 [2024-12-09 13:13:52.382420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.331 [2024-12-09 13:13:52.382550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.331 [2024-12-09 13:13:52.382566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.331 [2024-12-09 13:13:52.382712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.331 [2024-12-09 13:13:52.382728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.331 [2024-12-09 13:13:52.382860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.331 [2024-12-09 13:13:52.382877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.331 #14 NEW cov: 12339 ft: 14924 corp: 13/41b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:06:50.331 [2024-12-09 13:13:52.451667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.331 [2024-12-09 13:13:52.451694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.331 [2024-12-09 13:13:52.451830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.331 [2024-12-09 13:13:52.451850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.331 #15 NEW cov: 12339 ft: 14944 corp: 14/43b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ChangeBit- 00:06:50.331 [2024-12-09 13:13:52.521516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.331 [2024-12-09 13:13:52.521543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.592 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:50.592 #16 NEW cov: 12362 ft: 14999 corp: 15/44b lim: 5 exec/s: 16 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:06:50.592 [2024-12-09 13:13:52.832878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.592 [2024-12-09 13:13:52.832926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.592 [2024-12-09 13:13:52.833068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.592 [2024-12-09 13:13:52.833096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.853 #17 NEW cov: 12362 ft: 15131 corp: 16/46b lim: 5 exec/s: 17 rss: 73Mb L: 2/5 MS: 1 ChangeByte- 00:06:50.853 [2024-12-09 13:13:52.873195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.853 [2024-12-09 13:13:52.873222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.853 [2024-12-09 13:13:52.873353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.853 [2024-12-09 13:13:52.873370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.853 [2024-12-09 13:13:52.873495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.853 [2024-12-09 13:13:52.873511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.853 [2024-12-09 13:13:52.873650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.853 [2024-12-09 13:13:52.873667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.853 #18 NEW cov: 12362 ft: 15191 corp: 17/50b lim: 5 exec/s: 18 rss: 73Mb L: 4/5 MS: 1 InsertByte- 00:06:50.853 [2024-12-09 13:13:52.933104] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.853 [2024-12-09 13:13:52.933130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.853 [2024-12-09 13:13:52.933258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.853 [2024-12-09 13:13:52.933274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.853 [2024-12-09 13:13:52.933394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.853 [2024-12-09 13:13:52.933411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.853 #19 NEW cov: 12362 ft: 15226 corp: 18/53b lim: 5 exec/s: 19 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:06:50.853 [2024-12-09 13:13:53.003275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.853 [2024-12-09 13:13:53.003302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.853 [2024-12-09 13:13:53.003420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.853 [2024-12-09 13:13:53.003437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.853 [2024-12-09 13:13:53.003568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.853 [2024-12-09 13:13:53.003584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.853 #20 NEW cov: 12362 ft: 15250 corp: 19/56b lim: 5 exec/s: 20 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:06:50.853 [2024-12-09 13:13:53.063244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.853 [2024-12-09 13:13:53.063271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.853 [2024-12-09 13:13:53.063404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.853 [2024-12-09 13:13:53.063420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.853 #21 NEW cov: 12362 ft: 15265 corp: 20/58b lim: 5 exec/s: 21 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:06:51.115 [2024-12-09 13:13:53.114126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.115 [2024-12-09 13:13:53.114154] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.115 [2024-12-09 13:13:53.114287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.115 [2024-12-09 13:13:53.114303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.115 [2024-12-09 13:13:53.114430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.115 [2024-12-09 13:13:53.114449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.116 [2024-12-09 13:13:53.114584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.114604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.116 [2024-12-09 13:13:53.114734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.114750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.116 #22 NEW cov: 12362 ft: 15300 corp: 21/63b lim: 5 exec/s: 22 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:06:51.116 [2024-12-09 13:13:53.184113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.184141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.116 [2024-12-09 13:13:53.184276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.184292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.116 [2024-12-09 13:13:53.184418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.184434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.116 [2024-12-09 13:13:53.184567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.184584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.116 #23 NEW cov: 12362 ft: 15329 corp: 22/67b lim: 5 exec/s: 23 rss: 73Mb L: 4/5 MS: 1 ChangeByte- 00:06:51.116 [2024-12-09 13:13:53.233780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.233807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.116 [2024-12-09 13:13:53.233938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.233954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.116 #24 NEW cov: 12362 ft: 15344 corp: 23/69b lim: 5 exec/s: 24 rss: 73Mb L: 2/5 MS: 1 ChangeByte- 00:06:51.116 [2024-12-09 13:13:53.284734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.284762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.116 [2024-12-09 13:13:53.284895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.284913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.116 [2024-12-09 13:13:53.285047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.285066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.116 [2024-12-09 13:13:53.285198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.285214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.116 [2024-12-09 13:13:53.285350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.285368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.116 #25 NEW cov: 12362 ft: 15348 corp: 24/74b lim: 5 exec/s: 25 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:06:51.116 [2024-12-09 13:13:53.333984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.334014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.116 [2024-12-09 13:13:53.334143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.116 [2024-12-09 13:13:53.334162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.377 #26 NEW cov: 12362 ft: 15355 corp: 25/76b lim: 5 exec/s: 26 rss: 74Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:51.377 
[2024-12-09 13:13:53.404227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.377 [2024-12-09 13:13:53.404256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.377 [2024-12-09 13:13:53.404388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.377 [2024-12-09 13:13:53.404408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.377 #27 NEW cov: 12362 ft: 15386 corp: 26/78b lim: 5 exec/s: 27 rss: 74Mb L: 2/5 MS: 1 EraseBytes- 00:06:51.377 [2024-12-09 13:13:53.474199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.377 [2024-12-09 13:13:53.474229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.377 #28 NEW cov: 12362 ft: 15391 corp: 27/79b lim: 5 exec/s: 28 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:51.377 [2024-12-09 13:13:53.525375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.377 [2024-12-09 13:13:53.525403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.377 [2024-12-09 13:13:53.525537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.377 [2024-12-09 13:13:53.525554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.377 [2024-12-09 13:13:53.525682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.377 [2024-12-09 13:13:53.525698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.377 [2024-12-09 13:13:53.525834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.377 [2024-12-09 13:13:53.525850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.377 [2024-12-09 13:13:53.525984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.377 [2024-12-09 13:13:53.526000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.377 #29 NEW cov: 12362 ft: 15407 corp: 28/84b lim: 5 exec/s: 29 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:51.377 [2024-12-09 13:13:53.595076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:51.377 [2024-12-09 13:13:53.595103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.377 [2024-12-09 13:13:53.595231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.377 [2024-12-09 13:13:53.595249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.377 [2024-12-09 13:13:53.595379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.377 [2024-12-09 13:13:53.595397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.377 #30 NEW cov: 12362 ft: 15459 corp: 29/87b lim: 5 exec/s: 15 rss: 74Mb L: 3/5 MS: 1 ChangeBit- 00:06:51.377 #30 DONE cov: 12362 ft: 15459 corp: 29/87b lim: 5 exec/s: 15 rss: 74Mb 00:06:51.377 Done 30 runs in 2 second(s) 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:51.638 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:06:51.639 [2024-12-09 13:13:53.770428] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:06:51.639 [2024-12-09 13:13:53.770516] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid298537 ] 00:06:51.899 [2024-12-09 13:13:53.969372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.899 [2024-12-09 13:13:54.002257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.899 [2024-12-09 13:13:54.061191] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.899 [2024-12-09 13:13:54.077453] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:51.899 INFO: Running with entropic power schedule (0xFF, 100). 00:06:51.899 INFO: Seed: 783839121 00:06:51.899 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:06:51.899 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:06:51.899 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:51.899 INFO: A corpus is not provided, starting from an empty corpus 00:06:52.159 [2024-12-09 13:13:54.153743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.159 [2024-12-09 13:13:54.153777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.159 #2 INITED cov: 12099 ft: 12118 corp: 1/1b exec/s: 0 rss: 70Mb 00:06:52.159 [2024-12-09 13:13:54.204062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.159 [2024-12-09 13:13:54.204088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.159 [2024-12-09 13:13:54.204227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.159 [2024-12-09 13:13:54.204247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.159 #3 NEW cov: 12247 ft: 13477 corp: 2/3b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CopyPart- 00:06:52.159 [2024-12-09 13:13:54.274118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.159 [2024-12-09 13:13:54.274145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.159 #4 NEW cov: 12253 ft: 13621 corp: 3/4b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 EraseBytes- 00:06:52.159 [2024-12-09 13:13:54.344193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.159 
[2024-12-09 13:13:54.344220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.159 #5 NEW cov: 12338 ft: 13900 corp: 4/5b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 CopyPart- 00:06:52.418 [2024-12-09 13:13:54.414450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.418 [2024-12-09 13:13:54.414479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.418 #6 NEW cov: 12338 ft: 13968 corp: 5/6b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 ShuffleBytes- 00:06:52.418 [2024-12-09 13:13:54.464619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.418 [2024-12-09 13:13:54.464646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.418 #7 NEW cov: 12338 ft: 14008 corp: 6/7b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 ChangeByte- 00:06:52.418 [2024-12-09 13:13:54.515283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.418 [2024-12-09 13:13:54.515311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.418 [2024-12-09 13:13:54.515442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.418 [2024-12-09 13:13:54.515459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.418 [2024-12-09 13:13:54.515591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.418 [2024-12-09 13:13:54.515608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.418 #8 NEW cov: 12338 ft: 14266 corp: 7/10b lim: 5 exec/s: 0 rss: 71Mb L: 3/3 MS: 1 InsertByte- 00:06:52.418 [2024-12-09 13:13:54.564888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.418 [2024-12-09 13:13:54.564915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.418 #9 NEW cov: 12338 ft: 14375 corp: 8/11b lim: 5 exec/s: 0 rss: 71Mb L: 1/3 MS: 1 ShuffleBytes- 00:06:52.418 [2024-12-09 13:13:54.635098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.418 [2024-12-09 13:13:54.635126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.678 #10 NEW cov: 12338 ft: 14432 corp: 9/12b lim: 5 exec/s: 0 rss: 71Mb L: 1/3 MS: 1 ChangeByte- 00:06:52.678 [2024-12-09 13:13:54.705739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) 
qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.678 [2024-12-09 13:13:54.705767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.678 [2024-12-09 13:13:54.705901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.678 [2024-12-09 13:13:54.705917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.678 #11 NEW cov: 12338 ft: 14469 corp: 10/14b lim: 5 exec/s: 0 rss: 71Mb L: 2/3 MS: 1 InsertByte- 00:06:52.678 [2024-12-09 13:13:54.755523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.678 [2024-12-09 13:13:54.755553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.679 #12 NEW cov: 12338 ft: 14498 corp: 11/15b lim: 5 exec/s: 0 rss: 71Mb L: 1/3 MS: 1 CopyPart- 00:06:52.679 [2024-12-09 13:13:54.805703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.679 [2024-12-09 13:13:54.805729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.679 #13 NEW cov: 12338 ft: 14521 corp: 12/16b lim: 5 exec/s: 0 rss: 71Mb L: 1/3 MS: 1 ShuffleBytes- 00:06:52.679 [2024-12-09 13:13:54.875964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.679 [2024-12-09 13:13:54.875992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.679 #14 NEW cov: 12338 ft: 14573 corp: 13/17b lim: 5 exec/s: 0 rss: 71Mb L: 1/3 MS: 1 ChangeBit- 00:06:52.939 [2024-12-09 13:13:54.946204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.939 [2024-12-09 13:13:54.946231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.939 #15 NEW cov: 12338 ft: 14593 corp: 14/18b lim: 5 exec/s: 0 rss: 71Mb L: 1/3 MS: 1 ChangeByte- 00:06:52.939 [2024-12-09 13:13:54.996327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.939 [2024-12-09 13:13:54.996356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.199 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:53.199 #16 NEW cov: 12361 ft: 14628 corp: 15/19b lim: 5 exec/s: 16 rss: 73Mb L: 1/3 MS: 1 ChangeBit- 00:06:53.199 [2024-12-09 13:13:55.337632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.199 [2024-12-09 13:13:55.337679] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.199 [2024-12-09 13:13:55.337823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.199 [2024-12-09 13:13:55.337847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.199 #17 NEW cov: 12361 ft: 14724 corp: 16/21b lim: 5 exec/s: 17 rss: 73Mb L: 2/3 MS: 1 InsertByte- 00:06:53.199 [2024-12-09 13:13:55.398363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.199 [2024-12-09 13:13:55.398395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.199 [2024-12-09 13:13:55.398520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.199 [2024-12-09 13:13:55.398537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.199 [2024-12-09 13:13:55.398668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.199 [2024-12-09 13:13:55.398686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.199 [2024-12-09 13:13:55.398821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.199 [2024-12-09 13:13:55.398836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.199 [2024-12-09 13:13:55.398958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.199 [2024-12-09 13:13:55.398976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.199 #18 NEW cov: 12361 ft: 15151 corp: 17/26b lim: 5 exec/s: 18 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:53.460 [2024-12-09 13:13:55.467546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.460 [2024-12-09 13:13:55.467574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.460 #19 NEW cov: 12361 ft: 15241 corp: 18/27b lim: 5 exec/s: 19 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:06:53.460 [2024-12-09 13:13:55.517609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.460 [2024-12-09 13:13:55.517635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.460 #20 NEW cov: 12361 ft: 15260 corp: 19/28b lim: 5 exec/s: 20 
rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:06:53.460 [2024-12-09 13:13:55.587988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.460 [2024-12-09 13:13:55.588016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.460 #21 NEW cov: 12361 ft: 15338 corp: 20/29b lim: 5 exec/s: 21 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:06:53.460 [2024-12-09 13:13:55.658164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.460 [2024-12-09 13:13:55.658193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.460 #22 NEW cov: 12361 ft: 15388 corp: 21/30b lim: 5 exec/s: 22 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:53.721 [2024-12-09 13:13:55.708517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.721 [2024-12-09 13:13:55.708546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.721 [2024-12-09 13:13:55.708689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.721 [2024-12-09 13:13:55.708710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.721 #23 NEW cov: 12361 ft: 15395 corp: 22/32b lim: 5 exec/s: 23 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:06:53.721 [2024-12-09 13:13:55.758988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.721 [2024-12-09 13:13:55.759015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.721 [2024-12-09 13:13:55.759147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.721 [2024-12-09 13:13:55.759163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.721 [2024-12-09 13:13:55.759283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.721 [2024-12-09 13:13:55.759300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.721 #24 NEW cov: 12361 ft: 15424 corp: 23/35b lim: 5 exec/s: 24 rss: 73Mb L: 3/5 MS: 1 ChangeByte- 00:06:53.721 [2024-12-09 13:13:55.828547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.721 [2024-12-09 13:13:55.828573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.721 #25 NEW cov: 12361 ft: 15435 corp: 24/36b lim: 5 
exec/s: 25 rss: 73Mb L: 1/5 MS: 1 CrossOver- 00:06:53.721 [2024-12-09 13:13:55.879147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.721 [2024-12-09 13:13:55.879174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.721 [2024-12-09 13:13:55.879297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.721 [2024-12-09 13:13:55.879322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.721 [2024-12-09 13:13:55.879452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.721 [2024-12-09 13:13:55.879469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.721 #26 NEW cov: 12361 ft: 15524 corp: 25/39b lim: 5 exec/s: 26 rss: 73Mb L: 3/5 MS: 1 ChangeByte- 00:06:53.721 [2024-12-09 13:13:55.929345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.721 [2024-12-09 13:13:55.929371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.721 [2024-12-09 13:13:55.929509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.721 [2024-12-09 13:13:55.929527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.721 [2024-12-09 13:13:55.929655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.721 [2024-12-09 13:13:55.929672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.982 #27 NEW cov: 12361 ft: 15544 corp: 26/42b lim: 5 exec/s: 27 rss: 73Mb L: 3/5 MS: 1 ShuffleBytes- 00:06:53.982 [2024-12-09 13:13:55.999275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-12-09 13:13:55.999303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.982 [2024-12-09 13:13:55.999429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-12-09 13:13:55.999447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.982 #28 NEW cov: 12361 ft: 15558 corp: 27/44b lim: 5 exec/s: 28 rss: 73Mb L: 2/5 MS: 1 EraseBytes- 00:06:53.982 [2024-12-09 13:13:56.049424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 
cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-12-09 13:13:56.049450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.982 [2024-12-09 13:13:56.049580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-12-09 13:13:56.049602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.982 #29 NEW cov: 12361 ft: 15576 corp: 28/46b lim: 5 exec/s: 29 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:06:53.982 [2024-12-09 13:13:56.120426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-12-09 13:13:56.120452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.982 [2024-12-09 13:13:56.120575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-12-09 13:13:56.120595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.982 [2024-12-09 13:13:56.120734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-12-09 13:13:56.120750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.982 [2024-12-09 13:13:56.120872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-12-09 13:13:56.120888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.982 [2024-12-09 13:13:56.121011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-12-09 13:13:56.121029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.982 #30 NEW cov: 12361 ft: 15582 corp: 29/51b lim: 5 exec/s: 15 rss: 74Mb L: 5/5 MS: 1 CMP- DE: "\377\377\000t"- 00:06:53.982 #30 DONE cov: 12361 ft: 15582 corp: 29/51b lim: 5 exec/s: 15 rss: 74Mb 00:06:53.982 ###### Recommended dictionary. ###### 00:06:53.982 "\377\377\000t" # Uses: 0 00:06:53.982 ###### End of recommended dictionary. 
###### 00:06:53.982 Done 30 runs in 2 second(s) 00:06:54.243 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:54.243 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:54.243 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:54.243 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:54.243 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:54.243 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:54.243 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:54.243 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:54.244 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:54.244 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:54.244 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:54.244 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:54.244 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:54.244 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:54.244 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:54.244 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:54.244 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:54.244 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:54.244 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:54.244 [2024-12-09 13:13:56.297700] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:06:54.244 [2024-12-09 13:13:56.297783] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid298850 ] 00:06:54.505 [2024-12-09 13:13:56.504079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.505 [2024-12-09 13:13:56.539023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.505 [2024-12-09 13:13:56.598125] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.505 [2024-12-09 13:13:56.614429] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:54.505 INFO: Running with entropic power schedule (0xFF, 100). 00:06:54.505 INFO: Seed: 3320846616 00:06:54.505 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:06:54.505 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:06:54.505 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:54.505 INFO: A corpus is not provided, starting from an empty corpus 00:06:54.505 #2 INITED exec/s: 0 rss: 65Mb 00:06:54.505 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:54.505 This may also happen if the target rejected all inputs we tried so far 00:06:54.505 [2024-12-09 13:13:56.691024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ae0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.505 [2024-12-09 13:13:56.691062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.505 [2024-12-09 13:13:56.691208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.505 [2024-12-09 13:13:56.691226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.765 NEW_FUNC[1/716]: 0x448aa8 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:54.765 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:54.765 #3 NEW cov: 12140 ft: 12141 corp: 2/23b lim: 40 exec/s: 0 rss: 72Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:06:55.026 [2024-12-09 13:13:57.031604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a0a0a0a0 cdw11:a0a0a094 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.026 [2024-12-09 13:13:57.031653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.026 #12 NEW cov: 12270 ft: 13042 corp: 3/32b lim: 40 exec/s: 0 rss: 72Mb L: 9/22 MS: 4 CopyPart-InsertByte-ChangeByte-InsertRepeatedBytes- 00:06:55.026 [2024-12-09 13:13:57.081564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a0a0a0a0 cdw11:a0a0942b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.026 [2024-12-09 13:13:57.081596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:06:55.026 #13 NEW cov: 12276 ft: 13311 corp: 4/40b lim: 40 exec/s: 0 rss: 72Mb L: 8/22 MS: 1 EraseBytes- 00:06:55.026 [2024-12-09 13:13:57.151914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ae0171f cdw11:1f1fe0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.026 [2024-12-09 13:13:57.151943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.026 [2024-12-09 13:13:57.152084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.026 [2024-12-09 13:13:57.152101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.026 #14 NEW cov: 12361 ft: 13506 corp: 5/62b lim: 40 exec/s: 0 rss: 72Mb L: 22/22 MS: 1 ChangeBinInt- 00:06:55.026 [2024-12-09 13:13:57.221914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a00a2a01 cdw11:000a328e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.026 [2024-12-09 13:13:57.221941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.026 #18 NEW cov: 12361 ft: 13594 corp: 6/73b lim: 40 exec/s: 0 rss: 72Mb L: 11/22 MS: 4 InsertByte-CrossOver-ShuffleBytes-CMP- DE: "\001\000\0122\216\231\007d"- 00:06:55.286 [2024-12-09 13:13:57.272096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a0a098a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.286 [2024-12-09 13:13:57.272122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.286 #19 NEW cov: 12361 ft: 13731 corp: 7/83b lim: 40 exec/s: 0 rss: 72Mb L: 10/22 MS: 1 InsertByte- 00:06:55.287 [2024-12-09 13:13:57.322242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a0a0a0a0 cdw11:a0a0942b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.287 [2024-12-09 13:13:57.322269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.287 #20 NEW cov: 12361 ft: 13795 corp: 8/91b lim: 40 exec/s: 0 rss: 72Mb L: 8/22 MS: 1 ShuffleBytes- 00:06:55.287 [2024-12-09 13:13:57.382420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a01000a cdw11:328e9907 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.287 [2024-12-09 13:13:57.382446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.287 #22 NEW cov: 12361 ft: 13911 corp: 9/100b lim: 40 exec/s: 0 rss: 72Mb L: 9/22 MS: 2 CopyPart-PersAutoDict- DE: "\001\000\0122\216\231\007d"- 00:06:55.287 [2024-12-09 13:13:57.433009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a01000a cdw11:328e9900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.287 [2024-12-09 13:13:57.433035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.287 [2024-12-09 13:13:57.433175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.287 [2024-12-09 13:13:57.433191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.287 [2024-12-09 13:13:57.433334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.287 [2024-12-09 13:13:57.433351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.287 #23 NEW cov: 12361 ft: 14161 corp: 10/129b lim: 40 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:55.287 [2024-12-09 13:13:57.502781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a0a0a0a0 cdw11:a0a0943b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.287 [2024-12-09 13:13:57.502808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.547 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:55.547 #24 NEW cov: 12384 ft: 14222 corp: 11/137b lim: 40 exec/s: 0 rss: 73Mb L: 8/29 MS: 1 ChangeByte- 00:06:55.547 [2024-12-09 13:13:57.572967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a04f2a01 cdw11:000a328e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.547 [2024-12-09 13:13:57.572993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.547 #30 NEW cov: 12384 ft: 14279 corp: 12/148b lim: 40 exec/s: 0 rss: 73Mb L: 11/29 MS: 1 ChangeByte- 00:06:55.548 [2024-12-09 13:13:57.643131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000a32 cdw11:8e990764 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.548 [2024-12-09 13:13:57.643157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.548 #31 NEW cov: 12384 ft: 14303 corp: 13/158b lim: 40 exec/s: 31 rss: 73Mb L: 10/29 MS: 1 PersAutoDict- DE: "\001\000\0122\216\231\007d"- 00:06:55.548 [2024-12-09 13:13:57.703408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:550a2a01 cdw11:000a328e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.548 [2024-12-09 13:13:57.703435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.548 #32 NEW cov: 12384 ft: 14378 corp: 14/169b lim: 40 exec/s: 32 rss: 73Mb L: 11/29 MS: 1 ChangeByte- 00:06:55.548 [2024-12-09 13:13:57.753494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:550a2a01 cdw11:01000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.548 [2024-12-09 13:13:57.753523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.808 #33 NEW cov: 12384 ft: 14425 corp: 15/183b lim: 40 exec/s: 33 rss: 73Mb L: 14/29 MS: 1 CopyPart- 00:06:55.808 [2024-12-09 13:13:57.823669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:550a012a cdw11:000a0100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.808 
[2024-12-09 13:13:57.823697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.808 #34 NEW cov: 12384 ft: 14480 corp: 16/197b lim: 40 exec/s: 34 rss: 73Mb L: 14/29 MS: 1 ShuffleBytes- 00:06:55.808 [2024-12-09 13:13:57.893865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a0a0a4a0 cdw11:a0a0a094 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.808 [2024-12-09 13:13:57.893896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.809 #35 NEW cov: 12384 ft: 14500 corp: 17/206b lim: 40 exec/s: 35 rss: 73Mb L: 9/29 MS: 1 ChangeBit- 00:06:55.809 [2024-12-09 13:13:57.944081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a0a0a480 cdw11:a0a0a094 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.809 [2024-12-09 13:13:57.944110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.809 #36 NEW cov: 12384 ft: 14530 corp: 18/215b lim: 40 exec/s: 36 rss: 73Mb L: 9/29 MS: 1 ChangeBit- 00:06:55.809 [2024-12-09 13:13:58.014470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:550a550a cdw11:012a000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.809 [2024-12-09 13:13:58.014497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.809 [2024-12-09 13:13:58.014646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:01000a32 cdw11:8e990764 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.809 [2024-12-09 13:13:58.014664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.070 #37 NEW cov: 12384 ft: 14561 corp: 19/231b lim: 40 exec/s: 37 rss: 73Mb L: 16/29 MS: 1 CopyPart- 00:06:56.070 [2024-12-09 13:13:58.085312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a0a09f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.070 [2024-12-09 13:13:58.085341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.070 [2024-12-09 13:13:58.085479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.070 [2024-12-09 13:13:58.085497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.070 [2024-12-09 13:13:58.085637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.070 [2024-12-09 13:13:58.085654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.070 [2024-12-09 13:13:58.085787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.070 [2024-12-09 13:13:58.085804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.070 [2024-12-09 13:13:58.085941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:9f9f9f9f cdw11:9f9f942b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.070 [2024-12-09 13:13:58.085957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:56.070 #39 NEW cov: 12384 ft: 15083 corp: 20/271b lim: 40 exec/s: 39 rss: 73Mb L: 40/40 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:56.070 [2024-12-09 13:13:58.134642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:550ad6fe cdw11:fefff5ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.070 [2024-12-09 13:13:58.134668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.070 #40 NEW cov: 12384 ft: 15128 corp: 21/285b lim: 40 exec/s: 40 rss: 73Mb L: 14/40 MS: 1 ChangeBinInt- 00:06:56.070 [2024-12-09 13:13:58.185033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ae0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.070 [2024-12-09 13:13:58.185064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.070 [2024-12-09 13:13:58.185205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.070 [2024-12-09 13:13:58.185221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.070 #41 NEW cov: 12384 ft: 15171 corp: 22/307b lim: 40 exec/s: 41 rss: 73Mb L: 22/40 MS: 1 CopyPart- 00:06:56.070 [2024-12-09 13:13:58.235039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a0a00000 cdw11:0007a480 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.070 [2024-12-09 13:13:58.235066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.070 #42 NEW cov: 12384 ft: 15187 corp: 23/320b lim: 40 exec/s: 42 rss: 74Mb L: 13/40 MS: 1 CMP- DE: "\000\000\000\007"- 00:06:56.070 [2024-12-09 13:13:58.305454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a0a0a4a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.070 [2024-12-09 13:13:58.305479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.070 [2024-12-09 13:13:58.305624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:a0943ba0 cdw11:a0a0a094 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.070 [2024-12-09 13:13:58.305641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.331 #43 NEW cov: 12384 ft: 15196 corp: 24/337b lim: 40 exec/s: 43 rss: 74Mb L: 17/40 MS: 1 CrossOver- 00:06:56.331 [2024-12-09 13:13:58.355339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a028a0a0 cdw11:a0942b94 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.331 [2024-12-09 13:13:58.355365] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.331 #46 NEW cov: 12384 ft: 15206 corp: 25/346b lim: 40 exec/s: 46 rss: 74Mb L: 9/40 MS: 3 EraseBytes-ChangeByte-CopyPart- 00:06:56.331 [2024-12-09 13:13:58.405559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a0a0a0a0 cdw11:a071a094 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.331 [2024-12-09 13:13:58.405591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.331 #47 NEW cov: 12384 ft: 15260 corp: 26/355b lim: 40 exec/s: 47 rss: 74Mb L: 9/40 MS: 1 ChangeByte- 00:06:56.331 [2024-12-09 13:13:58.455717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a0a094a0 cdw11:28a0942b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.331 [2024-12-09 13:13:58.455743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.331 #48 NEW cov: 12384 ft: 15269 corp: 27/364b lim: 40 exec/s: 48 rss: 74Mb L: 9/40 MS: 1 ShuffleBytes- 00:06:56.331 [2024-12-09 13:13:58.525988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a0a0a0a0 cdw11:01000a3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.331 [2024-12-09 13:13:58.526016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.331 #50 NEW cov: 12384 ft: 15305 corp: 28/374b lim: 40 exec/s: 50 rss: 74Mb L: 10/40 MS: 2 EraseBytes-CrossOver- 00:06:56.592 [2024-12-09 13:13:58.596694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ae0e0e0 cdw11:e0e03131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.592 [2024-12-09 13:13:58.596721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.592 [2024-12-09 13:13:58.596861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.592 [2024-12-09 13:13:58.596876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.592 [2024-12-09 13:13:58.597019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:313131e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.592 [2024-12-09 13:13:58.597035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.592 [2024-12-09 13:13:58.597174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.592 [2024-12-09 13:13:58.597190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.592 #56 NEW cov: 12384 ft: 15322 corp: 29/409b lim: 40 exec/s: 56 rss: 74Mb L: 35/40 MS: 1 InsertRepeatedBytes- 00:06:56.592 [2024-12-09 13:13:58.646270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:a0a00007 cdw11:800000a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:56.592 [2024-12-09 13:13:58.646295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.592 #57 NEW cov: 12384 ft: 15339 corp: 30/422b lim: 40 exec/s: 28 rss: 74Mb L: 13/40 MS: 1 ShuffleBytes- 00:06:56.592 #57 DONE cov: 12384 ft: 15339 corp: 30/422b lim: 40 exec/s: 28 rss: 74Mb 00:06:56.592 ###### Recommended dictionary. ###### 00:06:56.592 "\001\000\0122\216\231\007d" # Uses: 2 00:06:56.592 "\000\000\000\007" # Uses: 0 00:06:56.592 ###### End of recommended dictionary. ###### 00:06:56.592 Done 57 runs in 2 second(s) 00:06:56.592 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:56.592 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:56.592 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:56.592 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:56.592 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:56.592 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:56.592 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:56.593 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:56.593 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:56.593 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:56.593 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:56.593 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:56.593 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:56.593 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:56.593 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:56.593 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:56.593 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:56.593 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:56.593 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:56.853 [2024-12-09 13:13:58.839239] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:06:56.853 [2024-12-09 13:13:58.839305] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid299356 ] 00:06:56.853 [2024-12-09 13:13:59.053687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.853 [2024-12-09 13:13:59.086919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.113 [2024-12-09 13:13:59.146051] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.113 [2024-12-09 13:13:59.162332] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:57.113 INFO: Running with entropic power schedule (0xFF, 100). 00:06:57.113 INFO: Seed: 1573856650 00:06:57.113 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:06:57.113 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:06:57.113 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:57.113 INFO: A corpus is not provided, starting from an empty corpus 00:06:57.113 #2 INITED exec/s: 0 rss: 65Mb 00:06:57.113 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:57.113 This may also happen if the target rejected all inputs we tried so far 00:06:57.113 [2024-12-09 13:13:59.217985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.113 [2024-12-09 13:13:59.218016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.113 [2024-12-09 13:13:59.218078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.113 [2024-12-09 13:13:59.218095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.113 [2024-12-09 13:13:59.218154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.113 [2024-12-09 13:13:59.218169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.373 NEW_FUNC[1/716]: 0x44a818 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:57.373 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:57.373 #7 NEW cov: 12165 ft: 12147 corp: 2/27b lim: 40 exec/s: 0 rss: 72Mb L: 26/26 MS: 5 ShuffleBytes-CopyPart-ChangeBit-CopyPart-InsertRepeatedBytes- 00:06:57.373 [2024-12-09 13:13:59.559245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.374 [2024-12-09 13:13:59.559301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.374 [2024-12-09 13:13:59.559391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND 
(81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.374 [2024-12-09 13:13:59.559419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.374 [2024-12-09 13:13:59.559506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.374 [2024-12-09 13:13:59.559532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.374 NEW_FUNC[1/1]: 0xfab318 in rte_get_timer_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/generic/rte_cycles.h:94 00:06:57.374 #8 NEW cov: 12283 ft: 12880 corp: 3/53b lim: 40 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 ShuffleBytes- 00:06:57.634 [2024-12-09 13:13:59.628984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.629013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.634 [2024-12-09 13:13:59.629081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.629096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.634 #9 NEW cov: 12289 ft: 13389 corp: 4/74b lim: 40 exec/s: 0 rss: 72Mb L: 21/26 MS: 1 EraseBytes- 00:06:57.634 [2024-12-09 13:13:59.688954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:47000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.688982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.634 #12 NEW cov: 12374 ft: 14282 corp: 5/83b lim: 40 exec/s: 0 rss: 72Mb L: 9/26 MS: 3 ChangeBit-ChangeBit-CMP- DE: "G\000\000\000\000\000\000\000"- 00:06:57.634 [2024-12-09 13:13:59.729354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.729379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.634 [2024-12-09 13:13:59.729442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffdfff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.729456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.634 [2024-12-09 13:13:59.729518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.729532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.634 #13 NEW cov: 12374 ft: 14344 corp: 6/109b lim: 40 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 ChangeBit- 00:06:57.634 [2024-12-09 13:13:59.769683] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.769709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.634 [2024-12-09 13:13:59.769784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff470000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.769798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.634 [2024-12-09 13:13:59.769861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.769875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.634 [2024-12-09 13:13:59.769936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.769950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.634 #14 NEW cov: 12374 ft: 14721 corp: 7/143b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 PersAutoDict- DE: "G\000\000\000\000\000\000\000"- 00:06:57.634 [2024-12-09 13:13:59.809607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:47000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.809631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.634 [2024-12-09 13:13:59.809653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:47000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.809664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.634 [2024-12-09 13:13:59.809683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.809693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.634 #15 NEW cov: 12374 ft: 14830 corp: 8/173b lim: 40 exec/s: 0 rss: 72Mb L: 30/34 MS: 1 CrossOver- 00:06:57.634 [2024-12-09 13:13:59.869797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.869822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.634 [2024-12-09 13:13:59.869884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffdfff cdw11:ffffff47 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.634 [2024-12-09 13:13:59.869899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.635 
[2024-12-09 13:13:59.869959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.635 [2024-12-09 13:13:59.869973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.894 #16 NEW cov: 12374 ft: 14908 corp: 9/199b lim: 40 exec/s: 0 rss: 72Mb L: 26/34 MS: 1 PersAutoDict- DE: "G\000\000\000\000\000\000\000"- 00:06:57.894 [2024-12-09 13:13:59.929958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.895 [2024-12-09 13:13:59.929983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.895 [2024-12-09 13:13:59.930047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffdfff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.895 [2024-12-09 13:13:59.930062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.895 [2024-12-09 13:13:59.930124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0a343c3b cdw11:c7f800ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.895 [2024-12-09 13:13:59.930138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.895 #17 NEW cov: 12374 ft: 15079 corp: 10/225b lim: 40 exec/s: 0 rss: 72Mb L: 26/34 MS: 1 CMP- DE: "\000\000\0124<;\307\370"- 00:06:57.895 [2024-12-09 13:13:59.990124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.895 [2024-12-09 13:13:59.990149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.895 [2024-12-09 13:13:59.990212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffdfff cdw11:00000a34 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.895 [2024-12-09 13:13:59.990227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.895 [2024-12-09 13:13:59.990291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:3c3bc7f8 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.895 [2024-12-09 13:13:59.990306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.895 #18 NEW cov: 12374 ft: 15152 corp: 11/251b lim: 40 exec/s: 0 rss: 72Mb L: 26/34 MS: 1 PersAutoDict- DE: "\000\000\0124<;\307\370"- 00:06:57.895 [2024-12-09 13:14:00.030242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.895 [2024-12-09 13:14:00.030269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.895 [2024-12-09 13:14:00.030332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffff40 cdw11:ffffffff 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.895 [2024-12-09 13:14:00.030347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.895 [2024-12-09 13:14:00.030409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.895 [2024-12-09 13:14:00.030423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.895 #19 NEW cov: 12374 ft: 15220 corp: 12/277b lim: 40 exec/s: 0 rss: 73Mb L: 26/34 MS: 1 ChangeByte- 00:06:57.895 [2024-12-09 13:14:00.070349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:47000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.895 [2024-12-09 13:14:00.070375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.895 [2024-12-09 13:14:00.070439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:47000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.895 [2024-12-09 13:14:00.070454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.895 [2024-12-09 13:14:00.070518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.895 [2024-12-09 13:14:00.070534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.895 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:57.895 #20 NEW cov: 12397 ft: 15319 corp: 13/308b lim: 40 exec/s: 0 rss: 73Mb L: 31/34 MS: 1 InsertByte- 00:06:57.895 [2024-12-09 13:14:00.130230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.895 [2024-12-09 13:14:00.130258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.155 #22 NEW cov: 12397 ft: 15352 corp: 14/323b lim: 40 exec/s: 0 rss: 73Mb L: 15/34 MS: 2 InsertRepeatedBytes-CMP- DE: "\000\000\0124?\314R\010"- 00:06:58.155 [2024-12-09 13:14:00.170672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.155 [2024-12-09 13:14:00.170698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.155 [2024-12-09 13:14:00.170764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.155 [2024-12-09 13:14:00.170779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.155 [2024-12-09 13:14:00.170846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.155 [2024-12-09 13:14:00.170860] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.155 #23 NEW cov: 12397 ft: 15392 corp: 15/352b lim: 40 exec/s: 0 rss: 73Mb L: 29/34 MS: 1 InsertRepeatedBytes- 00:06:58.155 [2024-12-09 13:14:00.210605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.155 [2024-12-09 13:14:00.210632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.155 [2024-12-09 13:14:00.210695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.155 [2024-12-09 13:14:00.210709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.155 #24 NEW cov: 12397 ft: 15405 corp: 16/373b lim: 40 exec/s: 24 rss: 73Mb L: 21/34 MS: 1 ChangeBit- 00:06:58.155 [2024-12-09 13:14:00.271114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:47ffff09 cdw11:34547152 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.155 [2024-12-09 13:14:00.271140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.155 [2024-12-09 13:14:00.271205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ea000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.155 [2024-12-09 13:14:00.271220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.155 [2024-12-09 13:14:00.271281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:47000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.155 [2024-12-09 13:14:00.271295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.155 [2024-12-09 13:14:00.271355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.155 [2024-12-09 13:14:00.271369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.155 #25 NEW cov: 12397 ft: 15443 corp: 17/411b lim: 40 exec/s: 25 rss: 73Mb L: 38/38 MS: 1 CMP- DE: "\377\377\0114TqR\352"- 00:06:58.155 [2024-12-09 13:14:00.311052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.155 [2024-12-09 13:14:00.311078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.155 [2024-12-09 13:14:00.311151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.155 [2024-12-09 13:14:00.311166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.155 [2024-12-09 13:14:00.311227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fffffeff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.155 [2024-12-09 13:14:00.311240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.155 #26 NEW cov: 12397 ft: 15473 corp: 18/437b lim: 40 exec/s: 26 rss: 73Mb L: 26/38 MS: 1 ChangeBit- 00:06:58.155 [2024-12-09 13:14:00.350882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:47000000 cdw11:1a00ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.155 [2024-12-09 13:14:00.350911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.155 #27 NEW cov: 12397 ft: 15487 corp: 19/451b lim: 40 exec/s: 27 rss: 73Mb L: 14/38 MS: 1 CrossOver- 00:06:58.415 [2024-12-09 13:14:00.411011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:47000000 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.415 [2024-12-09 13:14:00.411036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.415 #28 NEW cov: 12397 ft: 15587 corp: 20/465b lim: 40 exec/s: 28 rss: 73Mb L: 14/38 MS: 1 CopyPart- 00:06:58.415 [2024-12-09 13:14:00.471509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.415 [2024-12-09 13:14:00.471534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.415 [2024-12-09 13:14:00.471597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffdfff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.415 [2024-12-09 13:14:00.471612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.415 [2024-12-09 13:14:00.471675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:47ff0000 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.415 [2024-12-09 13:14:00.471689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.415 #29 NEW cov: 12397 ft: 15612 corp: 21/491b lim: 40 exec/s: 29 rss: 73Mb L: 26/38 MS: 1 ShuffleBytes- 00:06:58.415 [2024-12-09 13:14:00.511798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.415 [2024-12-09 13:14:00.511824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.416 [2024-12-09 13:14:00.511891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.416 [2024-12-09 13:14:00.511906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.416 [2024-12-09 13:14:00.511967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.416 [2024-12-09 
13:14:00.511981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.416 [2024-12-09 13:14:00.512044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:fffeffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.416 [2024-12-09 13:14:00.512058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.416 #30 NEW cov: 12397 ft: 15667 corp: 22/529b lim: 40 exec/s: 30 rss: 73Mb L: 38/38 MS: 1 CopyPart- 00:06:58.416 [2024-12-09 13:14:00.571789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.416 [2024-12-09 13:14:00.571815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.416 [2024-12-09 13:14:00.571881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffdfff cdw11:ffffff47 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.416 [2024-12-09 13:14:00.571895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.416 [2024-12-09 13:14:00.571960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000045 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.416 [2024-12-09 13:14:00.571977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.416 #31 NEW cov: 12397 ft: 15725 corp: 23/556b lim: 40 exec/s: 31 rss: 73Mb L: 27/38 MS: 1 InsertByte- 00:06:58.416 [2024-12-09 13:14:00.611893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.416 [2024-12-09 13:14:00.611920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.416 [2024-12-09 13:14:00.611987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffff47 cdw11:00ffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.416 [2024-12-09 13:14:00.612002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.416 [2024-12-09 13:14:00.612065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dfff0000 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.416 [2024-12-09 13:14:00.612079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.416 #32 NEW cov: 12397 ft: 15752 corp: 24/582b lim: 40 exec/s: 32 rss: 73Mb L: 26/38 MS: 1 ShuffleBytes- 00:06:58.676 [2024-12-09 13:14:00.672063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:47000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.676 [2024-12-09 13:14:00.672089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.676 [2024-12-09 13:14:00.672155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:47000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.676 [2024-12-09 13:14:00.672170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.676 [2024-12-09 13:14:00.672236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.676 [2024-12-09 13:14:00.672251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.676 #33 NEW cov: 12397 ft: 15767 corp: 25/613b lim: 40 exec/s: 33 rss: 73Mb L: 31/38 MS: 1 InsertByte- 00:06:58.676 [2024-12-09 13:14:00.712184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.676 [2024-12-09 13:14:00.712210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.676 [2024-12-09 13:14:00.712276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.676 [2024-12-09 13:14:00.712291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.676 [2024-12-09 13:14:00.712357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fffffeff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.676 [2024-12-09 13:14:00.712371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.676 #34 NEW cov: 12397 ft: 15794 corp: 26/642b lim: 40 exec/s: 34 rss: 73Mb L: 29/38 MS: 1 InsertRepeatedBytes- 00:06:58.676 [2024-12-09 13:14:00.751959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:bfff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.676 [2024-12-09 13:14:00.751985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.676 #35 NEW cov: 12397 ft: 15810 corp: 27/657b lim: 40 exec/s: 35 rss: 73Mb L: 15/38 MS: 1 ChangeBit- 00:06:58.676 [2024-12-09 13:14:00.812082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:47000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.676 [2024-12-09 13:14:00.812108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.676 #36 NEW cov: 12397 ft: 15846 corp: 28/666b lim: 40 exec/s: 36 rss: 73Mb L: 9/38 MS: 1 ShuffleBytes- 00:06:58.676 [2024-12-09 13:14:00.852539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.676 [2024-12-09 13:14:00.852566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.676 [2024-12-09 13:14:00.852638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.676 [2024-12-09 
13:14:00.852654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.676 [2024-12-09 13:14:00.852719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff0400ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.676 [2024-12-09 13:14:00.852733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.676 #37 NEW cov: 12397 ft: 15852 corp: 29/692b lim: 40 exec/s: 37 rss: 73Mb L: 26/38 MS: 1 ChangeBinInt- 00:06:58.676 [2024-12-09 13:14:00.892661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.676 [2024-12-09 13:14:00.892687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.676 [2024-12-09 13:14:00.892756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.676 [2024-12-09 13:14:00.892771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.676 [2024-12-09 13:14:00.892838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffdf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.676 [2024-12-09 13:14:00.892852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.936 #38 NEW cov: 12397 ft: 15879 corp: 30/716b lim: 40 exec/s: 38 rss: 74Mb L: 24/38 MS: 1 InsertRepeatedBytes- 00:06:58.936 [2024-12-09 13:14:00.952830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.936 [2024-12-09 13:14:00.952856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.936 [2024-12-09 13:14:00.952923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffdfff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.936 [2024-12-09 13:14:00.952938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.936 [2024-12-09 13:14:00.953004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:47ff0000 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.936 [2024-12-09 13:14:00.953018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.936 #39 NEW cov: 12397 ft: 15895 corp: 31/742b lim: 40 exec/s: 39 rss: 74Mb L: 26/38 MS: 1 CopyPart- 00:06:58.936 [2024-12-09 13:14:00.992601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:18d91aff cdw11:18d91aff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.936 [2024-12-09 13:14:00.992630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.936 #43 NEW cov: 12397 ft: 15936 corp: 32/754b lim: 40 exec/s: 43 rss: 74Mb L: 12/38 MS: 4 
CrossOver-ChangeBit-ChangeByte-CopyPart- 00:06:58.936 [2024-12-09 13:14:01.052784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:47000035 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.936 [2024-12-09 13:14:01.052810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.936 #44 NEW cov: 12397 ft: 15998 corp: 33/763b lim: 40 exec/s: 44 rss: 74Mb L: 9/38 MS: 1 ChangeByte- 00:06:58.936 [2024-12-09 13:14:01.093369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.937 [2024-12-09 13:14:01.093395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.937 [2024-12-09 13:14:01.093460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.937 [2024-12-09 13:14:01.093475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.937 [2024-12-09 13:14:01.093538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fffffeff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.937 [2024-12-09 13:14:01.093552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.937 [2024-12-09 13:14:01.093615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:afafafaf cdw11:afafafaf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.937 [2024-12-09 13:14:01.093629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.937 #45 NEW cov: 12397 ft: 16013 corp: 34/802b lim: 40 exec/s: 45 rss: 74Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:58.937 [2024-12-09 13:14:01.132981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.937 [2024-12-09 13:14:01.133007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.937 #46 NEW cov: 12397 ft: 16021 corp: 35/816b lim: 40 exec/s: 46 rss: 74Mb L: 14/39 MS: 1 EraseBytes- 00:06:59.198 [2024-12-09 13:14:01.193318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffcc cdw11:ffbfff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.198 [2024-12-09 13:14:01.193343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.198 [2024-12-09 13:14:01.193411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000a343f cdw11:cc52080a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.198 [2024-12-09 13:14:01.193425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.198 #47 NEW cov: 12397 ft: 16028 corp: 36/832b lim: 40 exec/s: 23 rss: 74Mb L: 16/39 MS: 1 InsertByte- 00:06:59.198 #47 DONE cov: 12397 ft: 16028 corp: 36/832b lim: 40 exec/s: 23 rss: 74Mb 
00:06:59.198 ###### Recommended dictionary. ###### 00:06:59.198 "G\000\000\000\000\000\000\000" # Uses: 2 00:06:59.198 "\000\000\0124<;\307\370" # Uses: 1 00:06:59.198 "\000\000\0124?\314R\010" # Uses: 0 00:06:59.198 "\377\377\0114TqR\352" # Uses: 0 00:06:59.198 ###### End of recommended dictionary. ###### 00:06:59.198 Done 47 runs in 2 second(s) 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:59.198 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:59.198 [2024-12-09 13:14:01.388182] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:06:59.198 [2024-12-09 13:14:01.388251] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid299889 ] 00:06:59.459 [2024-12-09 13:14:01.592387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.459 [2024-12-09 13:14:01.625641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.459 [2024-12-09 13:14:01.684627] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.459 [2024-12-09 13:14:01.700949] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:59.718 INFO: Running with entropic power schedule (0xFF, 100). 00:06:59.718 INFO: Seed: 4110878171 00:06:59.718 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:06:59.718 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:06:59.718 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:59.719 INFO: A corpus is not provided, starting from an empty corpus 00:06:59.719 #2 INITED exec/s: 0 rss: 65Mb 00:06:59.719 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:59.719 This may also happen if the target rejected all inputs we tried so far 00:06:59.719 [2024-12-09 13:14:01.760170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.719 [2024-12-09 13:14:01.760207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.978 NEW_FUNC[1/717]: 0x44c588 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:59.978 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:59.978 #9 NEW cov: 12168 ft: 12149 corp: 2/15b lim: 40 exec/s: 0 rss: 72Mb L: 14/14 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:59.978 [2024-12-09 13:14:02.091098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.978 [2024-12-09 13:14:02.091144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.978 #10 NEW cov: 12281 ft: 12785 corp: 3/28b lim: 40 exec/s: 0 rss: 72Mb L: 13/14 MS: 1 EraseBytes- 00:06:59.978 [2024-12-09 13:14:02.151078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.978 [2024-12-09 13:14:02.151104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.978 #11 NEW cov: 12287 ft: 13131 corp: 4/42b lim: 40 exec/s: 0 rss: 72Mb L: 14/14 MS: 1 ChangeBinInt- 00:06:59.979 [2024-12-09 13:14:02.191320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.979 [2024-12-09 13:14:02.191346] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.979 [2024-12-09 13:14:02.191404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3e8e8e8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.979 [2024-12-09 13:14:02.191418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.979 #12 NEW cov: 12372 ft: 14003 corp: 5/60b lim: 40 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:07:00.239 [2024-12-09 13:14:02.231297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.239 [2024-12-09 13:14:02.231322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.239 #13 NEW cov: 12372 ft: 14128 corp: 6/73b lim: 40 exec/s: 0 rss: 73Mb L: 13/18 MS: 1 ShuffleBytes- 00:07:00.239 [2024-12-09 13:14:02.291453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a353 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.239 [2024-12-09 13:14:02.291477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.239 #14 NEW cov: 12372 ft: 14202 corp: 7/86b lim: 40 exec/s: 0 rss: 73Mb L: 13/18 MS: 1 ChangeBinInt- 00:07:00.239 [2024-12-09 13:14:02.331551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3e653 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.239 [2024-12-09 13:14:02.331575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.239 #20 NEW cov: 12372 ft: 14262 corp: 8/99b lim: 40 exec/s: 0 rss: 73Mb L: 13/18 MS: 1 ChangeByte- 00:07:00.239 [2024-12-09 13:14:02.392024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:e653a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.239 [2024-12-09 13:14:02.392050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.239 [2024-12-09 13:14:02.392106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.239 [2024-12-09 13:14:02.392120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.239 [2024-12-09 13:14:02.392170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3e8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.239 [2024-12-09 13:14:02.392186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.239 #21 NEW cov: 12372 ft: 14522 corp: 9/127b lim: 40 exec/s: 0 rss: 73Mb L: 28/28 MS: 1 CrossOver- 00:07:00.239 [2024-12-09 13:14:02.452174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:62626262 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.239 [2024-12-09 13:14:02.452199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.239 [2024-12-09 13:14:02.452256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:62626262 cdw11:62626262 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.239 [2024-12-09 13:14:02.452270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.239 [2024-12-09 13:14:02.452325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:5d5c5c58 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.239 [2024-12-09 13:14:02.452339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.500 #22 NEW cov: 12372 ft: 14595 corp: 10/153b lim: 40 exec/s: 0 rss: 73Mb L: 26/28 MS: 1 InsertRepeatedBytes- 00:07:00.500 [2024-12-09 13:14:02.512195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.500 [2024-12-09 13:14:02.512221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.500 [2024-12-09 13:14:02.512277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.500 [2024-12-09 13:14:02.512291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.500 #23 NEW cov: 12372 ft: 14672 corp: 11/170b lim: 40 exec/s: 0 rss: 73Mb L: 17/28 MS: 1 CrossOver- 00:07:00.500 [2024-12-09 13:14:02.552433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.500 [2024-12-09 13:14:02.552458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.500 [2024-12-09 13:14:02.552514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:53a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.500 [2024-12-09 13:14:02.552528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.500 [2024-12-09 13:14:02.552584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.500 [2024-12-09 13:14:02.552603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.500 #24 NEW cov: 12372 ft: 14682 corp: 12/196b lim: 40 exec/s: 0 rss: 73Mb L: 26/28 MS: 1 CrossOver- 00:07:00.500 [2024-12-09 13:14:02.592252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.500 [2024-12-09 13:14:02.592277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.500 #29 NEW cov: 12372 ft: 14716 corp: 13/204b lim: 40 exec/s: 0 rss: 73Mb L: 8/28 MS: 5 CopyPart-ShuffleBytes-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:00.500 
[2024-12-09 13:14:02.632535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.500 [2024-12-09 13:14:02.632564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.500 [2024-12-09 13:14:02.632622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a0a3 cdw11:a3e8e8e8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.500 [2024-12-09 13:14:02.632637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.500 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:00.500 #30 NEW cov: 12395 ft: 14761 corp: 14/222b lim: 40 exec/s: 0 rss: 73Mb L: 18/28 MS: 1 ChangeBinInt- 00:07:00.500 [2024-12-09 13:14:02.672484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02002700 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.500 [2024-12-09 13:14:02.672510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.500 #31 NEW cov: 12395 ft: 14781 corp: 15/230b lim: 40 exec/s: 0 rss: 73Mb L: 8/28 MS: 1 ChangeByte- 00:07:00.500 [2024-12-09 13:14:02.732817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.500 [2024-12-09 13:14:02.732843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.500 [2024-12-09 13:14:02.732901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3e8e8e8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.500 [2024-12-09 13:14:02.732916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.761 #32 NEW cov: 12395 ft: 14795 corp: 16/249b lim: 40 exec/s: 32 rss: 73Mb L: 19/28 MS: 1 InsertByte- 00:07:00.761 [2024-12-09 13:14:02.772790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02002700 cdw11:00000080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.761 [2024-12-09 13:14:02.772815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.761 #33 NEW cov: 12395 ft: 14863 corp: 17/257b lim: 40 exec/s: 33 rss: 73Mb L: 8/28 MS: 1 ChangeBit- 00:07:00.761 [2024-12-09 13:14:02.832908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02003d00 cdw11:00000080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.761 [2024-12-09 13:14:02.832933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.761 #34 NEW cov: 12395 ft: 14871 corp: 18/265b lim: 40 exec/s: 34 rss: 73Mb L: 8/28 MS: 1 ChangeByte- 00:07:00.761 [2024-12-09 13:14:02.893400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.761 [2024-12-09 13:14:02.893425] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.761 [2024-12-09 13:14:02.893482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:53a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.761 [2024-12-09 13:14:02.893495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.761 [2024-12-09 13:14:02.893549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a373a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.761 [2024-12-09 13:14:02.893563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.762 #35 NEW cov: 12395 ft: 14925 corp: 19/291b lim: 40 exec/s: 35 rss: 74Mb L: 26/28 MS: 1 ChangeByte- 00:07:00.762 [2024-12-09 13:14:02.953405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.762 [2024-12-09 13:14:02.953433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.762 [2024-12-09 13:14:02.953491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5d727272 cdw11:72725c5c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.762 [2024-12-09 13:14:02.953505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.762 #36 NEW cov: 12395 ft: 14947 corp: 20/310b lim: 40 exec/s: 36 rss: 74Mb L: 19/28 MS: 1 InsertRepeatedBytes- 00:07:00.762 [2024-12-09 13:14:02.993538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.762 [2024-12-09 13:14:02.993563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.762 [2024-12-09 13:14:02.993622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.762 [2024-12-09 13:14:02.993636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.023 #37 NEW cov: 12395 ft: 14985 corp: 21/333b lim: 40 exec/s: 37 rss: 74Mb L: 23/28 MS: 1 CopyPart- 00:07:01.023 [2024-12-09 13:14:03.053887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:e653a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.023 [2024-12-09 13:14:03.053911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.023 [2024-12-09 13:14:03.053967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a303a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.023 [2024-12-09 13:14:03.053981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.023 [2024-12-09 13:14:03.054038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 
cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.023 [2024-12-09 13:14:03.054052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.023 #38 NEW cov: 12395 ft: 15026 corp: 22/362b lim: 40 exec/s: 38 rss: 74Mb L: 29/29 MS: 1 InsertByte- 00:07:01.023 [2024-12-09 13:14:03.114038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:e653a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.023 [2024-12-09 13:14:03.114063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.023 [2024-12-09 13:14:03.114116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a307a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.023 [2024-12-09 13:14:03.114129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.023 [2024-12-09 13:14:03.114185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.023 [2024-12-09 13:14:03.114199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.023 #39 NEW cov: 12395 ft: 15038 corp: 23/391b lim: 40 exec/s: 39 rss: 74Mb L: 29/29 MS: 1 ChangeBit- 00:07:01.023 [2024-12-09 13:14:03.173891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02003d00 cdw11:00000080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.023 [2024-12-09 13:14:03.173915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.023 #40 NEW cov: 12395 ft: 15060 corp: 24/399b lim: 40 exec/s: 40 rss: 74Mb L: 8/29 MS: 1 CopyPart- 00:07:01.023 [2024-12-09 13:14:03.234351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a30000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.023 [2024-12-09 13:14:03.234375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.023 [2024-12-09 13:14:03.234431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.023 [2024-12-09 13:14:03.234444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.023 [2024-12-09 13:14:03.234496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.023 [2024-12-09 13:14:03.234510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.023 #41 NEW cov: 12395 ft: 15066 corp: 25/428b lim: 40 exec/s: 41 rss: 74Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:07:01.284 [2024-12-09 13:14:03.274477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:62626262 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.284 [2024-12-09 13:14:03.274501] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.284 [2024-12-09 13:14:03.274558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:62626262 cdw11:62626262 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.284 [2024-12-09 13:14:03.274572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.284 [2024-12-09 13:14:03.274644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:626262a3 cdw11:5d5c5c58 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.284 [2024-12-09 13:14:03.274658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.284 #42 NEW cov: 12395 ft: 15140 corp: 26/454b lim: 40 exec/s: 42 rss: 74Mb L: 26/29 MS: 1 CopyPart- 00:07:01.284 [2024-12-09 13:14:03.334655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:e653a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.284 [2024-12-09 13:14:03.334679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.284 [2024-12-09 13:14:03.334737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a307a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.284 [2024-12-09 13:14:03.334751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.284 [2024-12-09 13:14:03.334805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.284 [2024-12-09 13:14:03.334819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.284 #43 NEW cov: 12395 ft: 15186 corp: 27/483b lim: 40 exec/s: 43 rss: 74Mb L: 29/29 MS: 1 ChangeBit- 00:07:01.284 [2024-12-09 13:14:03.374459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02003d10 cdw11:00000080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.284 [2024-12-09 13:14:03.374484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.284 #44 NEW cov: 12395 ft: 15198 corp: 28/491b lim: 40 exec/s: 44 rss: 74Mb L: 8/29 MS: 1 ChangeBit- 00:07:01.284 [2024-12-09 13:14:03.414692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.284 [2024-12-09 13:14:03.414720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.284 [2024-12-09 13:14:03.414778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.284 [2024-12-09 13:14:03.414792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.284 #45 NEW cov: 12395 ft: 15214 corp: 29/508b lim: 40 exec/s: 45 rss: 74Mb L: 17/29 MS: 1 ChangeBit- 00:07:01.284 
[2024-12-09 13:14:03.475072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:e653a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.284 [2024-12-09 13:14:03.475096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.284 [2024-12-09 13:14:03.475154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a307a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.284 [2024-12-09 13:14:03.475168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.284 [2024-12-09 13:14:03.475221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.284 [2024-12-09 13:14:03.475234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.284 #46 NEW cov: 12395 ft: 15225 corp: 30/538b lim: 40 exec/s: 46 rss: 74Mb L: 30/30 MS: 1 InsertByte- 00:07:01.546 [2024-12-09 13:14:03.535064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:a3a330a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.546 [2024-12-09 13:14:03.535089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.546 [2024-12-09 13:14:03.535149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5d727272 cdw11:72725c5c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.546 [2024-12-09 13:14:03.535164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.546 #47 NEW cov: 12395 ft: 15241 corp: 31/557b lim: 40 exec/s: 47 rss: 74Mb L: 19/30 MS: 1 ChangeByte- 00:07:01.546 [2024-12-09 13:14:03.595365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.546 [2024-12-09 13:14:03.595390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.546 [2024-12-09 13:14:03.595445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a353a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.546 [2024-12-09 13:14:03.595460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.546 [2024-12-09 13:14:03.595514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.546 [2024-12-09 13:14:03.595528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.546 #48 NEW cov: 12395 ft: 15275 corp: 32/581b lim: 40 exec/s: 48 rss: 74Mb L: 24/30 MS: 1 EraseBytes- 00:07:01.546 [2024-12-09 13:14:03.635350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:a3a3305d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.546 [2024-12-09 13:14:03.635375] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.546 [2024-12-09 13:14:03.635434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72a37272 cdw11:72725c5c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.546 [2024-12-09 13:14:03.635448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.546 #49 NEW cov: 12395 ft: 15291 corp: 33/600b lim: 40 exec/s: 49 rss: 75Mb L: 19/30 MS: 1 ShuffleBytes- 00:07:01.546 [2024-12-09 13:14:03.695513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.546 [2024-12-09 13:14:03.695538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.546 [2024-12-09 13:14:03.695604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.546 [2024-12-09 13:14:03.695618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.546 #50 NEW cov: 12395 ft: 15296 corp: 34/617b lim: 40 exec/s: 50 rss: 75Mb L: 17/30 MS: 1 ChangeBit- 00:07:01.546 [2024-12-09 13:14:03.755824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.546 [2024-12-09 13:14:03.755848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.546 [2024-12-09 13:14:03.755902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.546 [2024-12-09 13:14:03.755916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.546 [2024-12-09 13:14:03.755970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:a3a353a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.546 [2024-12-09 13:14:03.755984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.546 #51 NEW cov: 12395 ft: 15303 corp: 35/645b lim: 40 exec/s: 25 rss: 75Mb L: 28/30 MS: 1 CrossOver- 00:07:01.546 #51 DONE cov: 12395 ft: 15303 corp: 35/645b lim: 40 exec/s: 25 rss: 75Mb 00:07:01.546 Done 51 runs in 2 second(s) 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:01.807 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:01.807 [2024-12-09 13:14:03.929298] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:01.807 [2024-12-09 13:14:03.929368] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300184 ] 00:07:02.068 [2024-12-09 13:14:04.136215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.068 [2024-12-09 13:14:04.170436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.068 [2024-12-09 13:14:04.229510] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.068 [2024-12-09 13:14:04.245839] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:02.068 INFO: Running with entropic power schedule (0xFF, 100). 00:07:02.068 INFO: Seed: 2360899788 00:07:02.068 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:07:02.068 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:07:02.068 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:02.068 INFO: A corpus is not provided, starting from an empty corpus 00:07:02.068 #2 INITED exec/s: 0 rss: 65Mb 00:07:02.068 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:02.068 This may also happen if the target rejected all inputs we tried so far 00:07:02.329 [2024-12-09 13:14:04.316524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.329 [2024-12-09 13:14:04.316562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.329 [2024-12-09 13:14:04.316700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.329 [2024-12-09 13:14:04.316717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.329 [2024-12-09 13:14:04.316851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.329 [2024-12-09 13:14:04.316868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.329 [2024-12-09 13:14:04.317000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.329 [2024-12-09 13:14:04.317016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.590 NEW_FUNC[1/716]: 0x44e158 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:02.590 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:02.590 #3 NEW cov: 12156 ft: 12157 corp: 2/35b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:02.590 [2024-12-09 13:14:04.657320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.590 [2024-12-09 13:14:04.657360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.590 [2024-12-09 13:14:04.657496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.590 [2024-12-09 13:14:04.657512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.590 [2024-12-09 13:14:04.657643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.590 [2024-12-09 13:14:04.657660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.590 #8 NEW cov: 12269 ft: 13148 corp: 3/65b lim: 40 exec/s: 0 rss: 72Mb L: 30/34 MS: 5 ChangeByte-ChangeBit-InsertByte-EraseBytes-InsertRepeatedBytes- 00:07:02.590 [2024-12-09 13:14:04.707143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:02.590 [2024-12-09 13:14:04.707174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.590 [2024-12-09 13:14:04.707303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.590 [2024-12-09 13:14:04.707319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.590 #10 NEW cov: 12275 ft: 13688 corp: 4/84b lim: 40 exec/s: 0 rss: 72Mb L: 19/34 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:02.590 [2024-12-09 13:14:04.757771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.590 [2024-12-09 13:14:04.757801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.590 [2024-12-09 13:14:04.757947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.590 [2024-12-09 13:14:04.757965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.590 [2024-12-09 13:14:04.758095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.590 [2024-12-09 13:14:04.758111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.590 [2024-12-09 13:14:04.758246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.590 [2024-12-09 13:14:04.758262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.590 #11 NEW cov: 12360 ft: 13940 corp: 5/118b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeByte- 00:07:02.590 [2024-12-09 13:14:04.827890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.590 [2024-12-09 13:14:04.827918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.590 [2024-12-09 13:14:04.828071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff24 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.590 [2024-12-09 13:14:04.828087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.590 [2024-12-09 13:14:04.828225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.590 [2024-12-09 13:14:04.828241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.590 [2024-12-09 13:14:04.828371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 
cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.590 [2024-12-09 13:14:04.828388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.852 #12 NEW cov: 12360 ft: 14050 corp: 6/151b lim: 40 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 EraseBytes- 00:07:02.852 [2024-12-09 13:14:04.897710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.852 [2024-12-09 13:14:04.897739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.852 [2024-12-09 13:14:04.897882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.852 [2024-12-09 13:14:04.897899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.852 #13 NEW cov: 12360 ft: 14096 corp: 7/169b lim: 40 exec/s: 0 rss: 72Mb L: 18/34 MS: 1 EraseBytes- 00:07:02.852 [2024-12-09 13:14:04.968382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.852 [2024-12-09 13:14:04.968410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.852 [2024-12-09 13:14:04.968553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:04ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.852 [2024-12-09 13:14:04.968569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.852 [2024-12-09 13:14:04.968705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.852 [2024-12-09 13:14:04.968721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.852 [2024-12-09 13:14:04.968847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.852 [2024-12-09 13:14:04.968864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.852 #14 NEW cov: 12360 ft: 14160 corp: 8/203b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeBinInt- 00:07:02.852 [2024-12-09 13:14:05.018410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.852 [2024-12-09 13:14:05.018436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.852 [2024-12-09 13:14:05.018560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff24 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.852 [2024-12-09 13:14:05.018576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.852 [2024-12-09 13:14:05.018710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff060000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.852 [2024-12-09 13:14:05.018725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.852 [2024-12-09 13:14:05.018851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.852 [2024-12-09 13:14:05.018866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.852 #15 NEW cov: 12360 ft: 14185 corp: 9/236b lim: 40 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ChangeBinInt- 00:07:02.852 [2024-12-09 13:14:05.088713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.852 [2024-12-09 13:14:05.088740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.852 [2024-12-09 13:14:05.088871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff24 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.852 [2024-12-09 13:14:05.088889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.852 [2024-12-09 13:14:05.089019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff07 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.852 [2024-12-09 13:14:05.089037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.852 [2024-12-09 13:14:05.089175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.852 [2024-12-09 13:14:05.089192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.112 #16 NEW cov: 12360 ft: 14233 corp: 10/269b lim: 40 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ChangeBinInt- 00:07:03.112 [2024-12-09 13:14:05.138795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffd1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.112 [2024-12-09 13:14:05.138821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.112 [2024-12-09 13:14:05.138971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:d1d1d1ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.112 [2024-12-09 13:14:05.138987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.112 [2024-12-09 13:14:05.139127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:24ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.112 [2024-12-09 13:14:05.139144] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.112 [2024-12-09 13:14:05.139286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.112 [2024-12-09 13:14:05.139302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.112 #17 NEW cov: 12360 ft: 14407 corp: 11/307b lim: 40 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:03.112 [2024-12-09 13:14:05.189017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.112 [2024-12-09 13:14:05.189045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.112 [2024-12-09 13:14:05.189188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff24 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.112 [2024-12-09 13:14:05.189212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.112 [2024-12-09 13:14:05.189354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.112 [2024-12-09 13:14:05.189371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.112 [2024-12-09 13:14:05.189495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.112 [2024-12-09 13:14:05.189511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.112 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:03.112 #18 NEW cov: 12383 ft: 14468 corp: 12/340b lim: 40 exec/s: 0 rss: 72Mb L: 33/38 MS: 1 CrossOver- 00:07:03.112 [2024-12-09 13:14:05.239170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.112 [2024-12-09 13:14:05.239197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.112 [2024-12-09 13:14:05.239327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffe4ff24 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.112 [2024-12-09 13:14:05.239346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.112 [2024-12-09 13:14:05.239478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.113 [2024-12-09 13:14:05.239494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.113 [2024-12-09 13:14:05.239631] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.113 [2024-12-09 13:14:05.239647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.113 #19 NEW cov: 12383 ft: 14514 corp: 13/373b lim: 40 exec/s: 0 rss: 72Mb L: 33/38 MS: 1 ChangeByte- 00:07:03.113 [2024-12-09 13:14:05.289337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.113 [2024-12-09 13:14:05.289364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.113 [2024-12-09 13:14:05.289498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffe4ff24 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.113 [2024-12-09 13:14:05.289515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.113 [2024-12-09 13:14:05.289652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.113 [2024-12-09 13:14:05.289668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.113 [2024-12-09 13:14:05.289806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffe2ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.113 [2024-12-09 13:14:05.289822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.113 #20 NEW cov: 12383 ft: 14540 corp: 14/406b lim: 40 exec/s: 20 rss: 73Mb L: 33/38 MS: 1 ChangeByte- 00:07:03.373 [2024-12-09 13:14:05.359626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffd7ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.359654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.373 [2024-12-09 13:14:05.359781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff24 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.359798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.373 [2024-12-09 13:14:05.359929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff07 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.359947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.373 [2024-12-09 13:14:05.360076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.360092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:07:03.373 #21 NEW cov: 12383 ft: 14582 corp: 15/439b lim: 40 exec/s: 21 rss: 73Mb L: 33/38 MS: 1 ChangeByte- 00:07:03.373 [2024-12-09 13:14:05.429566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.429595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.373 [2024-12-09 13:14:05.429732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.429747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.373 [2024-12-09 13:14:05.429873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:26262626 cdw11:00002626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.429890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.373 #27 NEW cov: 12383 ft: 14610 corp: 16/469b lim: 40 exec/s: 27 rss: 73Mb L: 30/38 MS: 1 CMP- DE: "\000\000"- 00:07:03.373 [2024-12-09 13:14:05.500016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0000ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.500044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.373 [2024-12-09 13:14:05.500177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffe4ff24 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.500193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.373 [2024-12-09 13:14:05.500322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.500339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.373 [2024-12-09 13:14:05.500478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffe2ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.500496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.373 #28 NEW cov: 12383 ft: 14638 corp: 17/502b lim: 40 exec/s: 28 rss: 73Mb L: 33/38 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:03.373 [2024-12-09 13:14:05.570192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff24ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.570219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.373 [2024-12-09 13:14:05.570366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffd7ff 
cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.570382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.373 [2024-12-09 13:14:05.570524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:24ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.570540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.373 [2024-12-09 13:14:05.570683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:0700ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.373 [2024-12-09 13:14:05.570701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.373 #29 NEW cov: 12383 ft: 14644 corp: 18/540b lim: 40 exec/s: 29 rss: 73Mb L: 38/38 MS: 1 CopyPart- 00:07:03.634 [2024-12-09 13:14:05.640447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.640475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.634 [2024-12-09 13:14:05.640622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffe4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.640640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.634 [2024-12-09 13:14:05.640778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff24ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.640796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.634 [2024-12-09 13:14:05.640928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffe2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.640944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.634 #30 NEW cov: 12383 ft: 14667 corp: 19/575b lim: 40 exec/s: 30 rss: 73Mb L: 35/38 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:03.634 [2024-12-09 13:14:05.690554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.690582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.634 [2024-12-09 13:14:05.690721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffe4ff24 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.690739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.634 [2024-12-09 13:14:05.690862] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff21 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.690879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.634 [2024-12-09 13:14:05.691012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffe2ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.691031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.634 #31 NEW cov: 12383 ft: 14677 corp: 20/608b lim: 40 exec/s: 31 rss: 73Mb L: 33/38 MS: 1 ChangeByte- 00:07:03.634 [2024-12-09 13:14:05.740733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.740762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.634 [2024-12-09 13:14:05.740897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.740914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.634 [2024-12-09 13:14:05.741051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffe4 cdw11:ff24ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.741071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.634 [2024-12-09 13:14:05.741197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.741214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.634 #32 NEW cov: 12383 ft: 14734 corp: 21/641b lim: 40 exec/s: 32 rss: 73Mb L: 33/38 MS: 1 CopyPart- 00:07:03.634 [2024-12-09 13:14:05.790910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.790938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.634 [2024-12-09 13:14:05.791081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.791099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.634 [2024-12-09 13:14:05.791238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffe4ff cdw11:24ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.791255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:07:03.634 [2024-12-09 13:14:05.791390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.791406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.634 #33 NEW cov: 12383 ft: 14766 corp: 22/674b lim: 40 exec/s: 33 rss: 73Mb L: 33/38 MS: 1 CopyPart- 00:07:03.634 [2024-12-09 13:14:05.840789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00260026 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.840818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.634 [2024-12-09 13:14:05.840946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.840966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.634 [2024-12-09 13:14:05.841109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:26262626 cdw11:26000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.634 [2024-12-09 13:14:05.841128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.895 #34 NEW cov: 12383 ft: 14852 corp: 23/705b lim: 40 exec/s: 34 rss: 73Mb L: 31/38 MS: 1 InsertByte- 00:07:03.895 [2024-12-09 13:14:05.911210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffd7ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.895 [2024-12-09 13:14:05.911237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.895 [2024-12-09 13:14:05.911374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.895 [2024-12-09 13:14:05.911393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.895 [2024-12-09 13:14:05.911525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.895 [2024-12-09 13:14:05.911543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.895 [2024-12-09 13:14:05.911683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0700ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.895 [2024-12-09 13:14:05.911701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.895 #35 NEW cov: 12383 ft: 14862 corp: 24/739b lim: 40 exec/s: 35 rss: 73Mb L: 34/38 MS: 1 CrossOver- 00:07:03.895 [2024-12-09 13:14:05.960979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0000ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:03.895 [2024-12-09 13:14:05.961007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.895 [2024-12-09 13:14:05.961139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0026e4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.895 [2024-12-09 13:14:05.961157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.895 #36 NEW cov: 12383 ft: 14876 corp: 25/757b lim: 40 exec/s: 36 rss: 73Mb L: 18/38 MS: 1 CrossOver- 00:07:03.895 [2024-12-09 13:14:06.031407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.895 [2024-12-09 13:14:06.031434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.895 [2024-12-09 13:14:06.031572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.895 [2024-12-09 13:14:06.031592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.895 [2024-12-09 13:14:06.031734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:03030303 cdw11:03030308 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.895 [2024-12-09 13:14:06.031751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.895 #38 NEW cov: 12383 ft: 14890 corp: 26/781b lim: 40 exec/s: 38 rss: 73Mb L: 24/38 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:03.895 [2024-12-09 13:14:06.081758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.895 [2024-12-09 13:14:06.081788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.895 [2024-12-09 13:14:06.081925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.895 [2024-12-09 13:14:06.081942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.895 [2024-12-09 13:14:06.082067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.896 [2024-12-09 13:14:06.082084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.896 [2024-12-09 13:14:06.082215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.896 [2024-12-09 13:14:06.082232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.896 #39 NEW cov: 12383 ft: 14894 corp: 27/815b lim: 40 exec/s: 39 rss: 73Mb L: 34/38 MS: 1 ChangeByte- 
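(Editorial note on reading the spdk_nvme_print_completion notices above: the two hex values in parentheses are the NVMe Status Code Type and Status Code of the completion the target returned for the fuzzed command. For example, (00/01) is the generic "invalid command opcode" status seen throughout this run, and (01/0d) later in the log is the command-specific "feature identifier not saveable" status. A purely illustrative shell breakdown, not part of the test output; bit layout per the 16-bit status half of completion dword 3 — P bit 0, SC bits 8:1, SCT bits 11:9, CRD bits 13:12, M bit 14, DNR bit 15:

    # Illustrative only: how "(00/01)" splits into Status Code Type / Status Code
    status=$(( (0x0 << 9) | (0x01 << 1) ))   # SCT=0x0 (generic), SC=0x01 (invalid command opcode)
    printf 'SCT=%02x SC=%02x\n' $(( (status >> 9) & 0x7 )) $(( (status >> 1) & 0xff ))
    # -> SCT=00 SC=01, which nvme_qpair.c prints as "INVALID OPCODE (00/01)"
)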
00:07:03.896 [2024-12-09 13:14:06.131479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.896 [2024-12-09 13:14:06.131506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.896 [2024-12-09 13:14:06.131636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.896 [2024-12-09 13:14:06.131653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.157 #40 NEW cov: 12383 ft: 14917 corp: 28/833b lim: 40 exec/s: 40 rss: 73Mb L: 18/38 MS: 1 CopyPart- 00:07:04.157 [2024-12-09 13:14:06.202012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.157 [2024-12-09 13:14:06.202038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.157 [2024-12-09 13:14:06.202174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:26262726 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.157 [2024-12-09 13:14:06.202191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.157 [2024-12-09 13:14:06.202322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:26262626 cdw11:00002626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.157 [2024-12-09 13:14:06.202338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.157 #41 NEW cov: 12383 ft: 14955 corp: 29/863b lim: 40 exec/s: 41 rss: 73Mb L: 30/38 MS: 1 ChangeBit- 00:07:04.157 [2024-12-09 13:14:06.252221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.157 [2024-12-09 13:14:06.252247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.157 [2024-12-09 13:14:06.252369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff24 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.157 [2024-12-09 13:14:06.252386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.157 [2024-12-09 13:14:06.252523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.157 [2024-12-09 13:14:06.252538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.157 [2024-12-09 13:14:06.252681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.157 [2024-12-09 13:14:06.252698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.157 #42 NEW cov: 12383 ft: 14958 corp: 30/896b lim: 40 exec/s: 42 rss: 73Mb L: 33/38 MS: 1 CrossOver- 00:07:04.157 [2024-12-09 13:14:06.302453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.157 [2024-12-09 13:14:06.302481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.157 [2024-12-09 13:14:06.302625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.157 [2024-12-09 13:14:06.302642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.157 [2024-12-09 13:14:06.302768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.157 [2024-12-09 13:14:06.302785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.157 [2024-12-09 13:14:06.302926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.157 [2024-12-09 13:14:06.302941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.157 #43 NEW cov: 12383 ft: 14996 corp: 31/930b lim: 40 exec/s: 21 rss: 74Mb L: 34/38 MS: 1 ShuffleBytes- 00:07:04.157 #43 DONE cov: 12383 ft: 14996 corp: 31/930b lim: 40 exec/s: 21 rss: 74Mb 00:07:04.157 ###### Recommended dictionary. ###### 00:07:04.157 "\000\000" # Uses: 2 00:07:04.157 ###### End of recommended dictionary. 
###### 00:07:04.157 Done 43 runs in 2 second(s) 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:04.418 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:04.418 [2024-12-09 13:14:06.492675] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
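(Editorial note: the run.sh xtrace above, start_llvm_fuzz 14 1 0x1, shows how each fuzzer instance is wired up — derive TCP port 4414 from the fuzzer number, rewrite the JSON config so the target listens on that port, prime the LeakSanitizer suppression file, and launch llvm_nvme_fuzz against the resulting transport ID. A condensed sketch of those traced commands follows; it is illustrative rather than verbatim: paths are shortened to $rootdir, and the output redirections of sed and echo are assumed, since xtrace does not show them.

    # Condensed from the run.sh trace above for fuzzer_type=14 (sketch, not verbatim)
    fuzzer_type=14 timen=1 core=0x1
    port=44$(printf %02d "$fuzzer_type")                       # -> 4414
    corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
    nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0
    mkdir -p "$corpus_dir"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # redirection assumed; the trace only shows the sed expression and the source file
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    # redirection assumed for the two leak suppressions as well
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"
    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
        -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" \
        -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"
)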
00:07:04.418 [2024-12-09 13:14:06.492752] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300711 ] 00:07:04.679 [2024-12-09 13:14:06.693590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.679 [2024-12-09 13:14:06.727393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.679 [2024-12-09 13:14:06.786395] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.679 [2024-12-09 13:14:06.802676] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:04.679 INFO: Running with entropic power schedule (0xFF, 100). 00:07:04.679 INFO: Seed: 621940266 00:07:04.679 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:07:04.679 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:07:04.679 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:04.679 INFO: A corpus is not provided, starting from an empty corpus 00:07:04.679 #2 INITED exec/s: 0 rss: 66Mb 00:07:04.679 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:04.679 This may also happen if the target rejected all inputs we tried so far 00:07:04.679 [2024-12-09 13:14:06.860484] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000003a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.679 [2024-12-09 13:14:06.860512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.679 [2024-12-09 13:14:06.860577] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.679 [2024-12-09 13:14:06.860596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.939 NEW_FUNC[1/717]: 0x44fd28 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:04.939 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:04.939 #17 NEW cov: 12150 ft: 12140 corp: 2/18b lim: 35 exec/s: 0 rss: 72Mb L: 17/17 MS: 5 ChangeBit-CMP-CopyPart-InsertByte-InsertRepeatedBytes- DE: "\001\000\000s"- 00:07:05.199 [2024-12-09 13:14:07.191444] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.199 [2024-12-09 13:14:07.191533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.199 #20 NEW cov: 12270 ft: 13644 corp: 3/31b lim: 35 exec/s: 0 rss: 72Mb L: 13/17 MS: 3 CopyPart-EraseBytes-InsertRepeatedBytes- 00:07:05.199 NEW_FUNC[1/2]: 0x471278 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:05.199 NEW_FUNC[2/2]: 0x1391868 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1769 00:07:05.199 #29 NEW cov: 12309 ft: 14059 corp: 4/39b lim: 35 exec/s: 0 rss: 72Mb L: 
8/17 MS: 4 InsertByte-ShuffleBytes-PersAutoDict-CMP- DE: "\001\000\000s"-"\010\000"- 00:07:05.199 [2024-12-09 13:14:07.281374] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000003a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.199 [2024-12-09 13:14:07.281403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.199 [2024-12-09 13:14:07.281467] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.199 [2024-12-09 13:14:07.281483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.199 #30 NEW cov: 12394 ft: 14298 corp: 5/57b lim: 35 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 InsertByte- 00:07:05.199 [2024-12-09 13:14:07.341562] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000003a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.199 [2024-12-09 13:14:07.341593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.199 [2024-12-09 13:14:07.341654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.199 [2024-12-09 13:14:07.341669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.199 #31 NEW cov: 12394 ft: 14412 corp: 6/75b lim: 35 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 ChangeBit- 00:07:05.459 #32 NEW cov: 12394 ft: 14511 corp: 7/87b lim: 35 exec/s: 0 rss: 73Mb L: 12/18 MS: 1 CMP- DE: "\001\000\000\000"- 00:07:05.459 [2024-12-09 13:14:07.462210] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000003a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.459 [2024-12-09 13:14:07.462237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.459 [2024-12-09 13:14:07.462316] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.459 [2024-12-09 13:14:07.462331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.459 [2024-12-09 13:14:07.462392] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.459 [2024-12-09 13:14:07.462406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.459 [2024-12-09 13:14:07.462465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.459 [2024-12-09 13:14:07.462479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.459 #33 NEW cov: 12394 ft: 14929 corp: 8/115b lim: 35 exec/s: 0 rss: 73Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:07:05.459 [2024-12-09 13:14:07.501997] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.459 [2024-12-09 13:14:07.502026] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.459 NEW_FUNC[1/2]: 0x46a708 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:07:05.459 NEW_FUNC[2/2]: 0x1389ee8 in nvmf_ctrlr_set_features_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1607 00:07:05.459 #34 NEW cov: 12451 ft: 15018 corp: 9/132b lim: 35 exec/s: 0 rss: 73Mb L: 17/28 MS: 1 CrossOver- 00:07:05.459 #35 NEW cov: 12451 ft: 15073 corp: 10/140b lim: 35 exec/s: 0 rss: 73Mb L: 8/28 MS: 1 ChangeByte- 00:07:05.459 #36 NEW cov: 12451 ft: 15123 corp: 11/149b lim: 35 exec/s: 0 rss: 73Mb L: 9/28 MS: 1 InsertByte- 00:07:05.459 [2024-12-09 13:14:07.662278] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000073 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.459 [2024-12-09 13:14:07.662307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.459 #37 NEW cov: 12451 ft: 15142 corp: 12/157b lim: 35 exec/s: 0 rss: 73Mb L: 8/28 MS: 1 ShuffleBytes- 00:07:05.721 #38 NEW cov: 12451 ft: 15176 corp: 13/166b lim: 35 exec/s: 0 rss: 73Mb L: 9/28 MS: 1 InsertByte- 00:07:05.721 [2024-12-09 13:14:07.742624] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.721 [2024-12-09 13:14:07.742652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.721 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:05.721 #39 NEW cov: 12474 ft: 15230 corp: 14/183b lim: 35 exec/s: 0 rss: 73Mb L: 17/28 MS: 1 ChangeByte- 00:07:05.721 [2024-12-09 13:14:07.802852] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.721 [2024-12-09 13:14:07.802879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.721 #40 NEW cov: 12474 ft: 15246 corp: 15/200b lim: 35 exec/s: 0 rss: 73Mb L: 17/28 MS: 1 CopyPart- 00:07:05.721 #44 NEW cov: 12474 ft: 15258 corp: 16/210b lim: 35 exec/s: 44 rss: 73Mb L: 10/28 MS: 4 EraseBytes-ChangeBit-ChangeBit-PersAutoDict- DE: "\001\000\000\000"- 00:07:05.721 [2024-12-09 13:14:07.902824] ctrlr.c:1786:nvmf_ctrlr_set_features_host_identifier: *ERROR*: Set Features - Host Identifier not allowed 00:07:05.721 [2024-12-09 13:14:07.903057] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST IDENTIFIER cid:4 cdw10:00000081 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.721 [2024-12-09 13:14:07.903084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMMAND SEQUENCE ERROR (00/0c) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.721 NEW_FUNC[1/2]: 0x475c08 in feat_host_identifier /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:372 00:07:05.722 NEW_FUNC[2/2]: 0x1392688 in nvmf_ctrlr_set_features_host_identifier /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1783 00:07:05.722 #45 NEW cov: 12493 ft: 15316 corp: 17/220b lim: 35 exec/s: 45 rss: 73Mb L: 10/28 MS: 1 ChangeBit- 00:07:05.983 #46 NEW cov: 12493 ft: 15327 corp: 18/229b lim: 35 
exec/s: 46 rss: 73Mb L: 9/28 MS: 1 PersAutoDict- DE: "\010\000"- 00:07:05.983 [2024-12-09 13:14:08.023313] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.984 [2024-12-09 13:14:08.023338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.984 #47 NEW cov: 12493 ft: 15338 corp: 19/241b lim: 35 exec/s: 47 rss: 73Mb L: 12/28 MS: 1 ShuffleBytes- 00:07:05.984 [2024-12-09 13:14:08.083462] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000073 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.984 [2024-12-09 13:14:08.083487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.984 #48 NEW cov: 12493 ft: 15355 corp: 20/250b lim: 35 exec/s: 48 rss: 74Mb L: 9/28 MS: 1 InsertByte- 00:07:05.984 #49 NEW cov: 12493 ft: 15360 corp: 21/258b lim: 35 exec/s: 49 rss: 74Mb L: 8/28 MS: 1 ShuffleBytes- 00:07:05.984 [2024-12-09 13:14:08.183754] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.984 [2024-12-09 13:14:08.183779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.984 #50 NEW cov: 12493 ft: 15367 corp: 22/270b lim: 35 exec/s: 50 rss: 74Mb L: 12/28 MS: 1 ChangeByte- 00:07:06.244 [2024-12-09 13:14:08.244400] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000003a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.244 [2024-12-09 13:14:08.244429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.244 [2024-12-09 13:14:08.244489] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.244 [2024-12-09 13:14:08.244503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.244 [2024-12-09 13:14:08.244565] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.244 [2024-12-09 13:14:08.244579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.244 [2024-12-09 13:14:08.244643] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.244 [2024-12-09 13:14:08.244657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.244 #51 NEW cov: 12493 ft: 15397 corp: 23/298b lim: 35 exec/s: 51 rss: 74Mb L: 28/28 MS: 1 CrossOver- 00:07:06.244 [2024-12-09 13:14:08.304554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.244 [2024-12-09 13:14:08.304581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.244 [2024-12-09 13:14:08.304702] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:06.244 [2024-12-09 13:14:08.304719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.244 [2024-12-09 13:14:08.304780] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.244 [2024-12-09 13:14:08.304793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.244 #52 NEW cov: 12493 ft: 15520 corp: 24/327b lim: 35 exec/s: 52 rss: 74Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:07:06.244 [2024-12-09 13:14:08.364246] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.244 [2024-12-09 13:14:08.364273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.244 #53 NEW cov: 12493 ft: 15537 corp: 25/340b lim: 35 exec/s: 53 rss: 74Mb L: 13/29 MS: 1 ShuffleBytes- 00:07:06.244 [2024-12-09 13:14:08.404520] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.244 [2024-12-09 13:14:08.404547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.244 #54 NEW cov: 12493 ft: 15585 corp: 26/357b lim: 35 exec/s: 54 rss: 74Mb L: 17/29 MS: 1 CrossOver- 00:07:06.244 #55 NEW cov: 12493 ft: 15590 corp: 27/366b lim: 35 exec/s: 55 rss: 74Mb L: 9/29 MS: 1 CMP- DE: "\377\377"- 00:07:06.504 #56 NEW cov: 12493 ft: 15610 corp: 28/376b lim: 35 exec/s: 56 rss: 74Mb L: 10/29 MS: 1 PersAutoDict- DE: "\377\377"- 00:07:06.504 #58 NEW cov: 12493 ft: 15620 corp: 29/385b lim: 35 exec/s: 58 rss: 74Mb L: 9/29 MS: 2 EraseBytes-PersAutoDict- DE: "\001\000\000\000"- 00:07:06.504 [2024-12-09 13:14:08.584979] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.504 [2024-12-09 13:14:08.585005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.504 [2024-12-09 13:14:08.585069] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.504 [2024-12-09 13:14:08.585086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.504 #59 NEW cov: 12493 ft: 15625 corp: 30/399b lim: 35 exec/s: 59 rss: 74Mb L: 14/29 MS: 1 InsertByte- 00:07:06.504 [2024-12-09 13:14:08.624982] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.504 [2024-12-09 13:14:08.625007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.504 #60 NEW cov: 12493 ft: 15629 corp: 31/409b lim: 35 exec/s: 60 rss: 74Mb L: 10/29 MS: 1 ShuffleBytes- 00:07:06.504 [2024-12-09 13:14:08.685298] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000003a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.504 [2024-12-09 13:14:08.685324] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.505 [2024-12-09 13:14:08.685402] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.505 [2024-12-09 13:14:08.685417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.505 #61 NEW cov: 12493 ft: 15666 corp: 32/427b lim: 35 exec/s: 61 rss: 74Mb L: 18/29 MS: 1 ChangeBinInt- 00:07:06.505 [2024-12-09 13:14:08.745506] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.505 [2024-12-09 13:14:08.745534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.766 #62 NEW cov: 12493 ft: 15685 corp: 33/445b lim: 35 exec/s: 62 rss: 74Mb L: 18/29 MS: 1 InsertByte- 00:07:06.766 [2024-12-09 13:14:08.785443] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.766 [2024-12-09 13:14:08.785469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.766 #66 NEW cov: 12493 ft: 15689 corp: 34/458b lim: 35 exec/s: 66 rss: 74Mb L: 13/29 MS: 4 ChangeBit-ShuffleBytes-PersAutoDict-InsertRepeatedBytes- DE: "\001\000\000s"- 00:07:06.766 [2024-12-09 13:14:08.825702] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.766 [2024-12-09 13:14:08.825729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.766 [2024-12-09 13:14:08.825808] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.766 [2024-12-09 13:14:08.825823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.766 [2024-12-09 13:14:08.865636] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.766 [2024-12-09 13:14:08.865663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.766 #68 NEW cov: 12493 ft: 15692 corp: 35/469b lim: 35 exec/s: 34 rss: 74Mb L: 11/29 MS: 2 CMP-EraseBytes- DE: "\001\000\0129M\021\341\250"- 00:07:06.766 #68 DONE cov: 12493 ft: 15692 corp: 35/469b lim: 35 exec/s: 34 rss: 74Mb 00:07:06.766 ###### Recommended dictionary. ###### 00:07:06.766 "\001\000\000s" # Uses: 2 00:07:06.766 "\010\000" # Uses: 1 00:07:06.766 "\001\000\000\000" # Uses: 2 00:07:06.766 "\377\377" # Uses: 1 00:07:06.766 "\001\000\0129M\021\341\250" # Uses: 0 00:07:06.766 ###### End of recommended dictionary. 
###### 00:07:06.766 Done 68 runs in 2 second(s) 00:07:06.766 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:06.766 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:06.766 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:06.766 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:06.766 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:06.766 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:06.766 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:06.766 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:06.766 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:06.766 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:06.766 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:06.766 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:06.766 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:06.766 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:06.766 13:14:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:06.766 13:14:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:07.027 13:14:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:07.027 13:14:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:07.027 13:14:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:07.027 [2024-12-09 13:14:09.037781] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
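(Editorial note: the "Recommended dictionary" block printed at the end of the previous run lists byte sequences, shown as octal C escapes such as "\001\000\000s", that the fuzzer found useful for this target. If one wanted to carry them over to later runs they could be transcribed into standard AFL/libFuzzer dictionary syntax as sketched below; the file name is hypothetical, and whether the llvm_nvme_fuzz wrapper forwards a -dict= option through to libFuzzer has not been verified here, so treat the usage as an assumption.

    # nvmf_14.dict (hypothetical file) -- AFL/libFuzzer dictionary syntax,
    # same bytes as the recommended dictionary above, re-escaped in hex
    kw1="\x01\x00\x00\x73"                      # "\001\000\000s"
    kw2="\x08\x00"                              # "\010\000"
    kw3="\x01\x00\x00\x00"                      # "\001\000\000\000"
    kw4="\xff\xff"                              # "\377\377"
    kw5="\x01\x00\x0a\x39\x4d\x11\xe1\xa8"      # "\001\000\0129M\021\341\250"
)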
00:07:07.027 [2024-12-09 13:14:09.037850] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301208 ] 00:07:07.027 [2024-12-09 13:14:09.239455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.288 [2024-12-09 13:14:09.273393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.288 [2024-12-09 13:14:09.332455] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.288 [2024-12-09 13:14:09.348780] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:07.288 INFO: Running with entropic power schedule (0xFF, 100). 00:07:07.288 INFO: Seed: 3169928684 00:07:07.288 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:07:07.288 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:07:07.288 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:07.288 INFO: A corpus is not provided, starting from an empty corpus 00:07:07.288 #2 INITED exec/s: 0 rss: 65Mb 00:07:07.288 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:07.288 This may also happen if the target rejected all inputs we tried so far 00:07:07.549 NEW_FUNC[1/704]: 0x451268 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:07.549 NEW_FUNC[2/704]: 0x471278 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:07.549 #5 NEW cov: 12028 ft: 12029 corp: 2/32b lim: 35 exec/s: 0 rss: 72Mb L: 31/31 MS: 3 CrossOver-CopyPart-InsertRepeatedBytes- 00:07:07.549 [2024-12-09 13:14:09.745852] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.549 [2024-12-09 13:14:09.745901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.810 NEW_FUNC[1/14]: 0x194e758 in spdk_nvme_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:263 00:07:07.810 NEW_FUNC[2/14]: 0x194e998 in nvme_admin_qpair_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:202 00:07:07.810 #11 NEW cov: 12289 ft: 13060 corp: 3/67b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:07.810 #12 NEW cov: 12295 ft: 13370 corp: 4/101b lim: 35 exec/s: 0 rss: 72Mb L: 34/35 MS: 1 CrossOver- 00:07:07.810 #13 NEW cov: 12380 ft: 13690 corp: 5/133b lim: 35 exec/s: 0 rss: 72Mb L: 32/35 MS: 1 CrossOver- 00:07:07.810 #14 NEW cov: 12380 ft: 14598 corp: 6/158b lim: 35 exec/s: 0 rss: 72Mb L: 25/35 MS: 1 EraseBytes- 00:07:07.810 #15 NEW cov: 12380 ft: 14724 corp: 7/186b lim: 35 exec/s: 0 rss: 72Mb L: 28/35 MS: 1 EraseBytes- 00:07:07.810 #16 NEW cov: 12380 ft: 14769 corp: 8/215b lim: 35 exec/s: 0 rss: 72Mb L: 29/35 MS: 1 InsertByte- 00:07:08.071 #22 NEW cov: 12380 ft: 14854 corp: 9/247b lim: 35 exec/s: 0 rss: 72Mb L: 32/35 MS: 1 ChangeBit- 00:07:08.071 #23 NEW cov: 12380 ft: 14892 corp: 10/275b lim: 35 exec/s: 0 rss: 72Mb L: 28/35 MS: 1 CopyPart- 00:07:08.071 #24 NEW cov: 12380 ft: 14975 corp: 11/306b lim: 35 exec/s: 0 rss: 72Mb L: 
31/35 MS: 1 CopyPart- 00:07:08.071 [2024-12-09 13:14:10.196702] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000086 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.071 [2024-12-09 13:14:10.196735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.071 #25 NEW cov: 12380 ft: 15019 corp: 12/338b lim: 35 exec/s: 0 rss: 73Mb L: 32/35 MS: 1 ChangeByte- 00:07:08.071 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:08.071 #26 NEW cov: 12403 ft: 15204 corp: 13/373b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:07:08.332 #27 NEW cov: 12403 ft: 15242 corp: 14/398b lim: 35 exec/s: 0 rss: 73Mb L: 25/35 MS: 1 ShuffleBytes- 00:07:08.332 #28 NEW cov: 12403 ft: 15244 corp: 15/424b lim: 35 exec/s: 28 rss: 73Mb L: 26/35 MS: 1 InsertByte- 00:07:08.332 [2024-12-09 13:14:10.437621] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.332 [2024-12-09 13:14:10.437651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.332 #29 NEW cov: 12403 ft: 15292 corp: 16/459b lim: 35 exec/s: 29 rss: 73Mb L: 35/35 MS: 1 ChangeBit- 00:07:08.332 #30 NEW cov: 12403 ft: 15298 corp: 17/490b lim: 35 exec/s: 30 rss: 73Mb L: 31/35 MS: 1 ShuffleBytes- 00:07:08.332 #31 NEW cov: 12403 ft: 15461 corp: 18/508b lim: 35 exec/s: 31 rss: 73Mb L: 18/35 MS: 1 CrossOver- 00:07:08.593 #32 NEW cov: 12403 ft: 15488 corp: 19/537b lim: 35 exec/s: 32 rss: 73Mb L: 29/35 MS: 1 CopyPart- 00:07:08.593 #33 NEW cov: 12403 ft: 15572 corp: 20/553b lim: 35 exec/s: 33 rss: 73Mb L: 16/35 MS: 1 CrossOver- 00:07:08.593 #34 NEW cov: 12403 ft: 15590 corp: 21/585b lim: 35 exec/s: 34 rss: 73Mb L: 32/35 MS: 1 CopyPart- 00:07:08.593 [2024-12-09 13:14:10.738206] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.593 [2024-12-09 13:14:10.738233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.593 #35 NEW cov: 12403 ft: 15615 corp: 22/611b lim: 35 exec/s: 35 rss: 73Mb L: 26/35 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:08.853 #36 NEW cov: 12403 ft: 15638 corp: 23/640b lim: 35 exec/s: 36 rss: 73Mb L: 29/35 MS: 1 ChangeBit- 00:07:08.853 #37 NEW cov: 12403 ft: 15652 corp: 24/662b lim: 35 exec/s: 37 rss: 73Mb L: 22/35 MS: 1 EraseBytes- 00:07:08.853 #38 NEW cov: 12403 ft: 15700 corp: 25/690b lim: 35 exec/s: 38 rss: 73Mb L: 28/35 MS: 1 ChangeBit- 00:07:08.853 [2024-12-09 13:14:10.938822] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000086 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.853 [2024-12-09 13:14:10.938849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.853 #39 NEW cov: 12403 ft: 15705 corp: 26/722b lim: 35 exec/s: 39 rss: 74Mb L: 32/35 MS: 1 ChangeBit- 00:07:08.853 #40 NEW cov: 12403 ft: 15755 corp: 27/748b lim: 35 exec/s: 40 rss: 74Mb L: 26/35 MS: 1 ChangeBinInt- 00:07:08.853 [2024-12-09 13:14:11.039383] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.854 
[2024-12-09 13:14:11.039408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.854 #41 NEW cov: 12403 ft: 15763 corp: 28/783b lim: 35 exec/s: 41 rss: 74Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:09.115 #42 NEW cov: 12403 ft: 15767 corp: 29/812b lim: 35 exec/s: 42 rss: 74Mb L: 29/35 MS: 1 ChangeBinInt- 00:07:09.115 #43 NEW cov: 12403 ft: 15834 corp: 30/840b lim: 35 exec/s: 43 rss: 74Mb L: 28/35 MS: 1 CopyPart- 00:07:09.115 #44 NEW cov: 12403 ft: 15845 corp: 31/858b lim: 35 exec/s: 44 rss: 74Mb L: 18/35 MS: 1 EraseBytes- 00:07:09.115 [2024-12-09 13:14:11.239993] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.115 [2024-12-09 13:14:11.240020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.115 #45 NEW cov: 12403 ft: 15874 corp: 32/893b lim: 35 exec/s: 45 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:07:09.115 [2024-12-09 13:14:11.279794] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.115 [2024-12-09 13:14:11.279820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.115 #46 NEW cov: 12403 ft: 15888 corp: 33/919b lim: 35 exec/s: 46 rss: 74Mb L: 26/35 MS: 1 CMP- DE: "\377\377\011:\271\324\320V"- 00:07:09.115 [2024-12-09 13:14:11.320199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.115 [2024-12-09 13:14:11.320224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.376 #47 NEW cov: 12403 ft: 15894 corp: 34/954b lim: 35 exec/s: 47 rss: 74Mb L: 35/35 MS: 1 ChangeByte- 00:07:09.376 #48 NEW cov: 12403 ft: 15907 corp: 35/983b lim: 35 exec/s: 24 rss: 74Mb L: 29/35 MS: 1 CrossOver- 00:07:09.377 #48 DONE cov: 12403 ft: 15907 corp: 35/983b lim: 35 exec/s: 24 rss: 74Mb 00:07:09.377 ###### Recommended dictionary. ###### 00:07:09.377 "\001\000\000\000\000\000\000\000" # Uses: 1 00:07:09.377 "\377\377\011:\271\324\320V" # Uses: 0 00:07:09.377 ###### End of recommended dictionary. 
###### 00:07:09.377 Done 48 runs in 2 second(s) 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:09.377 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:09.377 [2024-12-09 13:14:11.554734] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
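(Editorial note: each start_llvm_fuzz invocation above sets LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 and emits two leak: patterns. Assuming those echo lines are redirected into the suppression file — the redirection itself is not visible in the xtrace — the file contents amount to plain LeakSanitizer suppression syntax, one symbol pattern per line:

    # /var/tmp/suppress_nvmf_fuzz -- LeakSanitizer suppressions consumed via the
    # LSAN_OPTIONS value shown in the traces above
    leak:spdk_nvmf_qpair_disconnect
    leak:nvmf_ctrlr_create
)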
00:07:09.377 [2024-12-09 13:14:11.554804] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301529 ] 00:07:09.639 [2024-12-09 13:14:11.760372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.639 [2024-12-09 13:14:11.794593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.639 [2024-12-09 13:14:11.853645] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.639 [2024-12-09 13:14:11.869967] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:09.900 INFO: Running with entropic power schedule (0xFF, 100). 00:07:09.900 INFO: Seed: 1395971779 00:07:09.900 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:07:09.900 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:07:09.900 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:09.900 INFO: A corpus is not provided, starting from an empty corpus 00:07:09.900 #2 INITED exec/s: 0 rss: 66Mb 00:07:09.900 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:09.900 This may also happen if the target rejected all inputs we tried so far 00:07:09.900 [2024-12-09 13:14:11.935271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.900 [2024-12-09 13:14:11.935303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.161 NEW_FUNC[1/717]: 0x452728 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:10.161 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:10.161 #30 NEW cov: 12224 ft: 12202 corp: 2/25b lim: 105 exec/s: 0 rss: 72Mb L: 24/24 MS: 3 ChangeBit-ChangeBit-InsertRepeatedBytes- 00:07:10.161 [2024-12-09 13:14:12.276182] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.161 [2024-12-09 13:14:12.276233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.161 #36 NEW cov: 12354 ft: 12932 corp: 3/49b lim: 105 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 ChangeByte- 00:07:10.161 [2024-12-09 13:14:12.336172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.161 [2024-12-09 13:14:12.336200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.161 #42 NEW cov: 12360 ft: 13272 corp: 4/73b lim: 105 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 ChangeBinInt- 00:07:10.161 [2024-12-09 13:14:12.376287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.161 [2024-12-09 13:14:12.376314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.161 #44 NEW cov: 12445 ft: 13521 corp: 5/98b lim: 105 exec/s: 0 rss: 72Mb L: 25/25 MS: 2 ChangeBit-CrossOver- 00:07:10.422 [2024-12-09 13:14:12.416639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.422 [2024-12-09 13:14:12.416666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.422 [2024-12-09 13:14:12.416725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.422 [2024-12-09 13:14:12.416742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.422 [2024-12-09 13:14:12.416801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.422 [2024-12-09 13:14:12.416817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.422 #46 NEW cov: 12445 ft: 14058 corp: 6/163b lim: 105 exec/s: 0 rss: 72Mb L: 65/65 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:10.422 [2024-12-09 13:14:12.456654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:9476562639758001027 len:33668 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.422 [2024-12-09 13:14:12.456681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.422 [2024-12-09 13:14:12.456738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:9476562641788044163 len:33668 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.422 [2024-12-09 13:14:12.456766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.422 #52 NEW cov: 12445 ft: 14501 corp: 7/210b lim: 105 exec/s: 0 rss: 72Mb L: 47/65 MS: 1 InsertRepeatedBytes- 00:07:10.422 [2024-12-09 13:14:12.496734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:9476562639758001027 len:33668 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.422 [2024-12-09 13:14:12.496761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.422 [2024-12-09 13:14:12.496816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:9476562641788044163 len:33668 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.422 [2024-12-09 13:14:12.496832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.422 #53 NEW cov: 12445 ft: 14553 corp: 8/257b lim: 105 exec/s: 0 rss: 72Mb L: 47/65 MS: 1 ChangeByte- 00:07:10.422 [2024-12-09 13:14:12.556846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.422 [2024-12-09 13:14:12.556873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.422 #59 NEW cov: 12445 ft: 14589 corp: 
9/284b lim: 105 exec/s: 0 rss: 72Mb L: 27/65 MS: 1 InsertRepeatedBytes- 00:07:10.422 [2024-12-09 13:14:12.596914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.422 [2024-12-09 13:14:12.596942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.422 #60 NEW cov: 12445 ft: 14635 corp: 10/309b lim: 105 exec/s: 0 rss: 72Mb L: 25/65 MS: 1 ShuffleBytes- 00:07:10.422 [2024-12-09 13:14:12.657332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.422 [2024-12-09 13:14:12.657359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.422 [2024-12-09 13:14:12.657402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.422 [2024-12-09 13:14:12.657417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.422 [2024-12-09 13:14:12.657475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.422 [2024-12-09 13:14:12.657492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.682 #61 NEW cov: 12445 ft: 14752 corp: 11/374b lim: 105 exec/s: 0 rss: 73Mb L: 65/65 MS: 1 ChangeByte- 00:07:10.682 [2024-12-09 13:14:12.717370] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.682 [2024-12-09 13:14:12.717397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.682 [2024-12-09 13:14:12.717455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.682 [2024-12-09 13:14:12.717472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.682 #69 NEW cov: 12445 ft: 14764 corp: 12/422b lim: 105 exec/s: 0 rss: 73Mb L: 48/65 MS: 3 InsertByte-CrossOver-InsertRepeatedBytes- 00:07:10.682 [2024-12-09 13:14:12.757372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:45080144510976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.682 [2024-12-09 13:14:12.757400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.682 #70 NEW cov: 12445 ft: 14819 corp: 13/447b lim: 105 exec/s: 0 rss: 73Mb L: 25/65 MS: 1 ChangeByte- 00:07:10.682 [2024-12-09 13:14:12.817542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.682 [2024-12-09 13:14:12.817570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.682 #73 NEW cov: 12445 ft: 14827 corp: 14/471b lim: 105 exec/s: 0 rss: 73Mb L: 
24/65 MS: 3 ChangeByte-ChangeByte-CrossOver- 00:07:10.682 [2024-12-09 13:14:12.857904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.682 [2024-12-09 13:14:12.857931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.682 [2024-12-09 13:14:12.857985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3255307777716137261 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.682 [2024-12-09 13:14:12.858002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.682 [2024-12-09 13:14:12.858058] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.682 [2024-12-09 13:14:12.858073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.682 #74 NEW cov: 12445 ft: 14838 corp: 15/536b lim: 105 exec/s: 0 rss: 73Mb L: 65/65 MS: 1 ChangeByte- 00:07:10.682 [2024-12-09 13:14:12.917821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.682 [2024-12-09 13:14:12.917849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.942 #75 NEW cov: 12445 ft: 14900 corp: 16/561b lim: 105 exec/s: 75 rss: 73Mb L: 25/65 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\003"- 00:07:10.942 [2024-12-09 13:14:12.958303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7378697629483820646 len:26215 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.942 [2024-12-09 13:14:12.958331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.942 [2024-12-09 13:14:12.958380] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:7378697629483820646 len:26215 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.942 [2024-12-09 13:14:12.958395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.942 [2024-12-09 13:14:12.958452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:7378697629483820646 len:26215 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.942 [2024-12-09 13:14:12.958468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.942 [2024-12-09 13:14:12.958524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7378697629483820646 len:26215 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.942 [2024-12-09 13:14:12.958539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.942 #77 NEW cov: 12445 ft: 15401 corp: 17/652b lim: 105 exec/s: 77 rss: 73Mb L: 91/91 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:10.942 [2024-12-09 13:14:12.998020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.942 [2024-12-09 13:14:12.998047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.942 #78 NEW cov: 12445 ft: 15504 corp: 18/684b lim: 105 exec/s: 78 rss: 73Mb L: 32/91 MS: 1 InsertRepeatedBytes- 00:07:10.942 [2024-12-09 13:14:13.038144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.942 [2024-12-09 13:14:13.038172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.942 #79 NEW cov: 12445 ft: 15514 corp: 19/711b lim: 105 exec/s: 79 rss: 73Mb L: 27/91 MS: 1 ShuffleBytes- 00:07:10.942 [2024-12-09 13:14:13.098355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5642533480229129806 len:20047 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.942 [2024-12-09 13:14:13.098382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.943 #81 NEW cov: 12445 ft: 15587 corp: 20/733b lim: 105 exec/s: 81 rss: 73Mb L: 22/91 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:10.943 [2024-12-09 13:14:13.138787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7378697629483820646 len:26215 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.943 [2024-12-09 13:14:13.138814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.943 [2024-12-09 13:14:13.138883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:7378697629483820646 len:26215 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.943 [2024-12-09 13:14:13.138899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.943 [2024-12-09 13:14:13.138957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:7378697629483820646 len:26215 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.943 [2024-12-09 13:14:13.138972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.943 [2024-12-09 13:14:13.139028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7378697629483820646 len:26215 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.943 [2024-12-09 13:14:13.139048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.943 #82 NEW cov: 12445 ft: 15591 corp: 21/834b lim: 105 exec/s: 82 rss: 73Mb L: 101/101 MS: 1 CopyPart- 00:07:11.203 [2024-12-09 13:14:13.198638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.203 [2024-12-09 13:14:13.198665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.203 #83 NEW cov: 12445 ft: 15607 corp: 22/859b lim: 105 exec/s: 83 rss: 73Mb L: 25/101 MS: 1 InsertByte- 00:07:11.203 [2024-12-09 13:14:13.239101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 
lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.203 [2024-12-09 13:14:13.239128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.203 [2024-12-09 13:14:13.239194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.203 [2024-12-09 13:14:13.239211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.203 [2024-12-09 13:14:13.239271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.203 [2024-12-09 13:14:13.239286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.203 [2024-12-09 13:14:13.239345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744070186336255 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.203 [2024-12-09 13:14:13.239360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.203 #84 NEW cov: 12445 ft: 15612 corp: 23/955b lim: 105 exec/s: 84 rss: 73Mb L: 96/101 MS: 1 InsertRepeatedBytes- 00:07:11.203 [2024-12-09 13:14:13.278844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.203 [2024-12-09 13:14:13.278873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.203 #85 NEW cov: 12445 ft: 15628 corp: 24/979b lim: 105 exec/s: 85 rss: 73Mb L: 24/101 MS: 1 ChangeBit- 00:07:11.203 [2024-12-09 13:14:13.339163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10125080986099352451 len:33668 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.203 [2024-12-09 13:14:13.339191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.203 [2024-12-09 13:14:13.339232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:9476562641788044163 len:33668 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.203 [2024-12-09 13:14:13.339248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.203 #86 NEW cov: 12445 ft: 15654 corp: 25/1026b lim: 105 exec/s: 86 rss: 73Mb L: 47/101 MS: 1 ChangeBinInt- 00:07:11.203 [2024-12-09 13:14:13.379358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.203 [2024-12-09 13:14:13.379384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.203 [2024-12-09 13:14:13.379422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.203 [2024-12-09 13:14:13.379439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:11.203 [2024-12-09 13:14:13.379500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.203 [2024-12-09 13:14:13.379530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.203 #87 NEW cov: 12445 ft: 15714 corp: 26/1102b lim: 105 exec/s: 87 rss: 73Mb L: 76/101 MS: 1 InsertRepeatedBytes- 00:07:11.204 [2024-12-09 13:14:13.419620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.204 [2024-12-09 13:14:13.419647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.204 [2024-12-09 13:14:13.419717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3255307777713450285 len:35724 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.204 [2024-12-09 13:14:13.419733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.204 [2024-12-09 13:14:13.419790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:10055284024492657547 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.204 [2024-12-09 13:14:13.419806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.204 [2024-12-09 13:14:13.419864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.204 [2024-12-09 13:14:13.419880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.204 #88 NEW cov: 12445 ft: 15728 corp: 27/1190b lim: 105 exec/s: 88 rss: 73Mb L: 88/101 MS: 1 InsertRepeatedBytes- 00:07:11.464 [2024-12-09 13:14:13.459389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:64000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.464 [2024-12-09 13:14:13.459417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.464 #89 NEW cov: 12445 ft: 15781 corp: 28/1215b lim: 105 exec/s: 89 rss: 73Mb L: 25/101 MS: 1 ChangeBinInt- 00:07:11.464 [2024-12-09 13:14:13.499489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.464 [2024-12-09 13:14:13.499517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.464 #90 NEW cov: 12445 ft: 15785 corp: 29/1247b lim: 105 exec/s: 90 rss: 73Mb L: 32/101 MS: 1 CrossOver- 00:07:11.464 [2024-12-09 13:14:13.560055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.464 [2024-12-09 13:14:13.560082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.464 [2024-12-09 13:14:13.560133] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 
lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.464 [2024-12-09 13:14:13.560149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.464 [2024-12-09 13:14:13.560206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.464 [2024-12-09 13:14:13.560221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.464 [2024-12-09 13:14:13.560280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744070186336255 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.464 [2024-12-09 13:14:13.560298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.464 #91 NEW cov: 12445 ft: 15796 corp: 30/1349b lim: 105 exec/s: 91 rss: 73Mb L: 102/102 MS: 1 CopyPart- 00:07:11.464 [2024-12-09 13:14:13.619848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.464 [2024-12-09 13:14:13.619875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.464 #92 NEW cov: 12445 ft: 15804 corp: 31/1373b lim: 105 exec/s: 92 rss: 74Mb L: 24/102 MS: 1 ChangeByte- 00:07:11.464 [2024-12-09 13:14:13.680045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:64000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.464 [2024-12-09 13:14:13.680072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.725 #93 NEW cov: 12445 ft: 15837 corp: 32/1398b lim: 105 exec/s: 93 rss: 74Mb L: 25/102 MS: 1 ShuffleBytes- 00:07:11.726 [2024-12-09 13:14:13.740192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.726 [2024-12-09 13:14:13.740220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.726 #94 NEW cov: 12445 ft: 15845 corp: 33/1422b lim: 105 exec/s: 94 rss: 74Mb L: 24/102 MS: 1 ChangeBit- 00:07:11.726 [2024-12-09 13:14:13.800473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.726 [2024-12-09 13:14:13.800501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.726 [2024-12-09 13:14:13.800554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3255307777716137261 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.726 [2024-12-09 13:14:13.800572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.726 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:11.726 #95 NEW cov: 12468 ft: 15871 corp: 34/1479b lim: 105 exec/s: 95 rss: 74Mb L: 57/102 MS: 1 EraseBytes- 00:07:11.726 [2024-12-09 
13:14:13.860739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3255307777713450285 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.726 [2024-12-09 13:14:13.860767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.726 [2024-12-09 13:14:13.860806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3255307777716137261 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.726 [2024-12-09 13:14:13.860823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.726 [2024-12-09 13:14:13.860878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:3255308108425932077 len:11566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.726 [2024-12-09 13:14:13.860895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.726 #96 NEW cov: 12468 ft: 15878 corp: 35/1545b lim: 105 exec/s: 96 rss: 74Mb L: 66/102 MS: 1 InsertByte- 00:07:11.726 [2024-12-09 13:14:13.900751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:9476562639758001027 len:33668 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.726 [2024-12-09 13:14:13.900778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.726 [2024-12-09 13:14:13.900840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:9476562641788044163 len:33668 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.726 [2024-12-09 13:14:13.900857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.726 #97 NEW cov: 12468 ft: 15888 corp: 36/1592b lim: 105 exec/s: 48 rss: 74Mb L: 47/102 MS: 1 ShuffleBytes- 00:07:11.726 #97 DONE cov: 12468 ft: 15888 corp: 36/1592b lim: 105 exec/s: 48 rss: 74Mb 00:07:11.726 ###### Recommended dictionary. ###### 00:07:11.726 "\001\000\000\000\000\000\000\003" # Uses: 0 00:07:11.726 ###### End of recommended dictionary. 
###### 00:07:11.726 Done 97 runs in 2 second(s) 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:11.987 13:14:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:11.987 [2024-12-09 13:14:14.093203] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:07:11.987 [2024-12-09 13:14:14.093286] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302069 ] 00:07:12.248 [2024-12-09 13:14:14.291944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.248 [2024-12-09 13:14:14.324965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.248 [2024-12-09 13:14:14.383782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.248 [2024-12-09 13:14:14.400093] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:12.248 INFO: Running with entropic power schedule (0xFF, 100). 00:07:12.248 INFO: Seed: 3924955945 00:07:12.248 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:07:12.248 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:07:12.248 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:12.248 INFO: A corpus is not provided, starting from an empty corpus 00:07:12.248 #2 INITED exec/s: 0 rss: 65Mb 00:07:12.248 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:12.248 This may also happen if the target rejected all inputs we tried so far 00:07:12.248 [2024-12-09 13:14:14.459201] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.248 [2024-12-09 13:14:14.459236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.248 [2024-12-09 13:14:14.459306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.248 [2024-12-09 13:14:14.459325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.770 NEW_FUNC[1/718]: 0x455aa8 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:12.770 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:12.770 #7 NEW cov: 12259 ft: 12263 corp: 2/66b lim: 120 exec/s: 0 rss: 72Mb L: 65/65 MS: 5 CopyPart-CrossOver-CopyPart-ChangeByte-InsertRepeatedBytes- 00:07:12.770 [2024-12-09 13:14:14.790365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.770 [2024-12-09 13:14:14.790434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.770 [2024-12-09 13:14:14.790548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.770 [2024-12-09 13:14:14.790584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.770 #8 NEW cov: 12376 ft: 12887 corp: 3/131b lim: 120 exec/s: 0 rss: 72Mb L: 65/65 MS: 1 ShuffleBytes- 
00:07:12.770 [2024-12-09 13:14:14.860190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013126307101 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.770 [2024-12-09 13:14:14.860218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.770 [2024-12-09 13:14:14.860262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.770 [2024-12-09 13:14:14.860278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.770 #18 NEW cov: 12382 ft: 13182 corp: 4/187b lim: 120 exec/s: 0 rss: 72Mb L: 56/65 MS: 5 ShuffleBytes-ChangeBinInt-CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:07:12.770 [2024-12-09 13:14:14.900254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.770 [2024-12-09 13:14:14.900284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.770 [2024-12-09 13:14:14.900341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.770 [2024-12-09 13:14:14.900356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.770 #19 NEW cov: 12467 ft: 13440 corp: 5/252b lim: 120 exec/s: 0 rss: 72Mb L: 65/65 MS: 1 CrossOver- 00:07:12.770 [2024-12-09 13:14:14.960606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.770 [2024-12-09 13:14:14.960638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.770 [2024-12-09 13:14:14.960672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.770 [2024-12-09 13:14:14.960688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.770 [2024-12-09 13:14:14.960744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.770 [2024-12-09 13:14:14.960759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.770 #25 NEW cov: 12467 ft: 13871 corp: 6/339b lim: 120 exec/s: 0 rss: 72Mb L: 87/87 MS: 1 InsertRepeatedBytes- 00:07:12.770 [2024-12-09 13:14:15.000546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.770 [2024-12-09 13:14:15.000574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.770 [2024-12-09 13:14:15.000646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:12.770 [2024-12-09 13:14:15.000664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.037 #26 NEW cov: 12467 ft: 14021 corp: 7/405b lim: 120 exec/s: 0 rss: 72Mb L: 66/87 MS: 1 InsertByte- 00:07:13.037 [2024-12-09 13:14:15.060734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013126307101 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.037 [2024-12-09 13:14:15.060763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.037 [2024-12-09 13:14:15.060801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.037 [2024-12-09 13:14:15.060818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.037 #27 NEW cov: 12467 ft: 14266 corp: 8/461b lim: 120 exec/s: 0 rss: 72Mb L: 56/87 MS: 1 ChangeBit- 00:07:13.037 [2024-12-09 13:14:15.120927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.037 [2024-12-09 13:14:15.120956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.037 [2024-12-09 13:14:15.120992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.037 [2024-12-09 13:14:15.121008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.037 #28 NEW cov: 12467 ft: 14372 corp: 9/527b lim: 120 exec/s: 0 rss: 72Mb L: 66/87 MS: 1 ShuffleBytes- 00:07:13.037 [2024-12-09 13:14:15.181058] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013126307101 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.037 [2024-12-09 13:14:15.181085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.037 [2024-12-09 13:14:15.181124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.037 [2024-12-09 13:14:15.181140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.037 #29 NEW cov: 12467 ft: 14425 corp: 10/583b lim: 120 exec/s: 0 rss: 72Mb L: 56/87 MS: 1 ChangeBinInt- 00:07:13.037 [2024-12-09 13:14:15.221346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.037 [2024-12-09 13:14:15.221373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.037 [2024-12-09 13:14:15.221420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.037 [2024-12-09 13:14:15.221436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.037 [2024-12-09 13:14:15.221491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:52943 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.037 [2024-12-09 13:14:15.221507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.037 #30 NEW cov: 12467 ft: 14468 corp: 11/665b lim: 120 exec/s: 0 rss: 73Mb L: 82/87 MS: 1 EraseBytes- 00:07:13.299 [2024-12-09 13:14:15.281538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.281566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.299 [2024-12-09 13:14:15.281620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.281637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.299 [2024-12-09 13:14:15.281694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.281709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.299 #31 NEW cov: 12467 ft: 14484 corp: 12/739b lim: 120 exec/s: 0 rss: 73Mb L: 74/87 MS: 1 EraseBytes- 00:07:13.299 [2024-12-09 13:14:15.321599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.321627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.299 [2024-12-09 13:14:15.321694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.321711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.299 [2024-12-09 13:14:15.321767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.321782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.299 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:13.299 #32 NEW cov: 12490 ft: 14548 corp: 13/826b lim: 120 exec/s: 0 rss: 73Mb L: 87/87 MS: 1 ChangeBinInt- 00:07:13.299 [2024-12-09 13:14:15.361597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.361625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.299 [2024-12-09 
13:14:15.361680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.361700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.299 #33 NEW cov: 12490 ft: 14630 corp: 14/892b lim: 120 exec/s: 0 rss: 73Mb L: 66/87 MS: 1 ShuffleBytes- 00:07:13.299 [2024-12-09 13:14:15.421759] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.421786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.299 [2024-12-09 13:14:15.421843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.421859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.299 #34 NEW cov: 12490 ft: 14636 corp: 15/957b lim: 120 exec/s: 34 rss: 73Mb L: 65/87 MS: 1 ChangeByte- 00:07:13.299 [2024-12-09 13:14:15.462153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.462179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.299 [2024-12-09 13:14:15.462246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.462264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.299 [2024-12-09 13:14:15.462320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.462334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.299 [2024-12-09 13:14:15.462390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14902074895974190798 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.462406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.299 #35 NEW cov: 12490 ft: 15028 corp: 16/1059b lim: 120 exec/s: 35 rss: 73Mb L: 102/102 MS: 1 CrossOver- 00:07:13.299 [2024-12-09 13:14:15.502138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.502165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.299 [2024-12-09 13:14:15.502212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.502228] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.299 [2024-12-09 13:14:15.502286] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.299 [2024-12-09 13:14:15.502302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.560 #36 NEW cov: 12490 ft: 15064 corp: 17/1146b lim: 120 exec/s: 36 rss: 73Mb L: 87/102 MS: 1 ChangeASCIIInt- 00:07:13.560 [2024-12-09 13:14:15.562298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.562326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.560 [2024-12-09 13:14:15.562388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.562404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.560 [2024-12-09 13:14:15.562463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.562480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.560 #37 NEW cov: 12490 ft: 15132 corp: 18/1233b lim: 120 exec/s: 37 rss: 73Mb L: 87/102 MS: 1 ShuffleBytes- 00:07:13.560 [2024-12-09 13:14:15.622303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.622330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.560 [2024-12-09 13:14:15.622385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.622401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.560 #38 NEW cov: 12490 ft: 15143 corp: 19/1290b lim: 120 exec/s: 38 rss: 73Mb L: 57/102 MS: 1 EraseBytes- 00:07:13.560 [2024-12-09 13:14:15.662437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.662464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.560 [2024-12-09 13:14:15.662514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369971278 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.662531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.560 #39 NEW cov: 12490 ft: 15165 corp: 20/1356b lim: 120 exec/s: 39 rss: 73Mb L: 66/102 
MS: 1 InsertByte- 00:07:13.560 [2024-12-09 13:14:15.722915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5652141160002375246 len:28785 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.722943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.560 [2024-12-09 13:14:15.723011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:8092491679232192624 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.723027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.560 [2024-12-09 13:14:15.723082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.723097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.560 [2024-12-09 13:14:15.723151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:5642533481369980494 len:52943 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.723168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.560 #40 NEW cov: 12490 ft: 15177 corp: 21/1462b lim: 120 exec/s: 40 rss: 73Mb L: 106/106 MS: 1 InsertRepeatedBytes- 00:07:13.560 [2024-12-09 13:14:15.783084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.783111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.560 [2024-12-09 13:14:15.783174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.783191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.560 [2024-12-09 13:14:15.783247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.783262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.560 [2024-12-09 13:14:15.783317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14902074895974190798 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.560 [2024-12-09 13:14:15.783333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.821 #41 NEW cov: 12490 ft: 15190 corp: 22/1564b lim: 120 exec/s: 41 rss: 73Mb L: 102/106 MS: 1 ChangeBinInt- 00:07:13.821 [2024-12-09 13:14:15.843211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.843238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.821 [2024-12-09 13:14:15.843290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.843307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.821 [2024-12-09 13:14:15.843361] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.843375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.821 [2024-12-09 13:14:15.843428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14902075604643794638 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.843444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.821 #42 NEW cov: 12490 ft: 15210 corp: 23/1667b lim: 120 exec/s: 42 rss: 73Mb L: 103/106 MS: 1 InsertByte- 00:07:13.821 [2024-12-09 13:14:15.883338] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.883366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.821 [2024-12-09 13:14:15.883419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.883435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.821 [2024-12-09 13:14:15.883506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.883523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.821 [2024-12-09 13:14:15.883579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14902075604643794638 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.883598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.821 #43 NEW cov: 12490 ft: 15220 corp: 24/1770b lim: 120 exec/s: 43 rss: 73Mb L: 103/106 MS: 1 InsertByte- 00:07:13.821 [2024-12-09 13:14:15.923506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.923534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.821 [2024-12-09 13:14:15.923594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.923610] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.821 [2024-12-09 13:14:15.923664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.923680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.821 [2024-12-09 13:14:15.923743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14902075604644384462 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.923758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.821 #44 NEW cov: 12490 ft: 15238 corp: 25/1873b lim: 120 exec/s: 44 rss: 73Mb L: 103/106 MS: 1 ChangeBinInt- 00:07:13.821 [2024-12-09 13:14:15.983621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.983649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.821 [2024-12-09 13:14:15.983717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.983734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.821 [2024-12-09 13:14:15.983788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20014 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.983804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.821 [2024-12-09 13:14:15.983857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14902074895974190798 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:15.983871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.821 #45 NEW cov: 12490 ft: 15277 corp: 26/1975b lim: 120 exec/s: 45 rss: 73Mb L: 102/106 MS: 1 ChangeByte- 00:07:13.821 [2024-12-09 13:14:16.043676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:16.043703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.821 [2024-12-09 13:14:16.043768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 [2024-12-09 13:14:16.043785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.821 [2024-12-09 13:14:16.043841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.821 
[2024-12-09 13:14:16.043857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.082 #46 NEW cov: 12490 ft: 15279 corp: 27/2063b lim: 120 exec/s: 46 rss: 74Mb L: 88/106 MS: 1 InsertByte- 00:07:14.082 [2024-12-09 13:14:16.103800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.082 [2024-12-09 13:14:16.103828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.082 [2024-12-09 13:14:16.103869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.082 [2024-12-09 13:14:16.103885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.082 [2024-12-09 13:14:16.103941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.082 [2024-12-09 13:14:16.103956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.082 #47 NEW cov: 12490 ft: 15286 corp: 28/2150b lim: 120 exec/s: 47 rss: 74Mb L: 87/106 MS: 1 ChangeBit- 00:07:14.082 [2024-12-09 13:14:16.143833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.082 [2024-12-09 13:14:16.143862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.082 [2024-12-09 13:14:16.143904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.082 [2024-12-09 13:14:16.143921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.082 #48 NEW cov: 12490 ft: 15305 corp: 29/2215b lim: 120 exec/s: 48 rss: 74Mb L: 65/106 MS: 1 CMP- DE: "\000\000\012=\213\355\245F"- 00:07:14.082 [2024-12-09 13:14:16.184069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.082 [2024-12-09 13:14:16.184096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.082 [2024-12-09 13:14:16.184132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.082 [2024-12-09 13:14:16.184148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.082 [2024-12-09 13:14:16.184204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.082 [2024-12-09 13:14:16.184220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.082 #49 NEW cov: 12490 ft: 
15386 corp: 30/2302b lim: 120 exec/s: 49 rss: 74Mb L: 87/106 MS: 1 ChangeASCIIInt- 00:07:14.082 [2024-12-09 13:14:16.224319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.082 [2024-12-09 13:14:16.224346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.083 [2024-12-09 13:14:16.224410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744070739984383 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.083 [2024-12-09 13:14:16.224428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.083 [2024-12-09 13:14:16.224485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5643659381276823118 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.083 [2024-12-09 13:14:16.224504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.083 [2024-12-09 13:14:16.224561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:5642533481367817806 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.083 [2024-12-09 13:14:16.224578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.083 #50 NEW cov: 12490 ft: 15431 corp: 31/2418b lim: 120 exec/s: 50 rss: 74Mb L: 116/116 MS: 1 InsertRepeatedBytes- 00:07:14.083 [2024-12-09 13:14:16.284323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.083 [2024-12-09 13:14:16.284351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.083 [2024-12-09 13:14:16.284390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.083 [2024-12-09 13:14:16.284406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.083 [2024-12-09 13:14:16.284463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:52943 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.083 [2024-12-09 13:14:16.284479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.083 #51 NEW cov: 12490 ft: 15440 corp: 32/2500b lim: 120 exec/s: 51 rss: 74Mb L: 82/116 MS: 1 ChangeBit- 00:07:14.083 [2024-12-09 13:14:16.324618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.083 [2024-12-09 13:14:16.324647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.083 [2024-12-09 13:14:16.324700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.083 [2024-12-09 13:14:16.324716] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.083 [2024-12-09 13:14:16.324773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.083 [2024-12-09 13:14:16.324789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.083 [2024-12-09 13:14:16.324847] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.083 [2024-12-09 13:14:16.324862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.344 #52 NEW cov: 12490 ft: 15446 corp: 33/2607b lim: 120 exec/s: 52 rss: 74Mb L: 107/116 MS: 1 CopyPart- 00:07:14.344 [2024-12-09 13:14:16.364746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.344 [2024-12-09 13:14:16.364774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.344 [2024-12-09 13:14:16.364823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.345 [2024-12-09 13:14:16.364840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.345 [2024-12-09 13:14:16.364897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.345 [2024-12-09 13:14:16.364912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.345 [2024-12-09 13:14:16.364969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:335007449088 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.345 [2024-12-09 13:14:16.364984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.345 #53 NEW cov: 12490 ft: 15464 corp: 34/2726b lim: 120 exec/s: 53 rss: 74Mb L: 119/119 MS: 1 InsertRepeatedBytes- 00:07:14.345 [2024-12-09 13:14:16.405030] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.345 [2024-12-09 13:14:16.405058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.345 [2024-12-09 13:14:16.405111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5642533481369980494 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.345 [2024-12-09 13:14:16.405127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.345 [2024-12-09 13:14:16.405196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.345 [2024-12-09 13:14:16.405213] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.345 [2024-12-09 13:14:16.405267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.345 [2024-12-09 13:14:16.405282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.345 [2024-12-09 13:14:16.405341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:14902074895974190798 len:12594 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.345 [2024-12-09 13:14:16.405355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:14.345 #54 NEW cov: 12490 ft: 15496 corp: 35/2846b lim: 120 exec/s: 27 rss: 74Mb L: 120/120 MS: 1 CrossOver- 00:07:14.345 #54 DONE cov: 12490 ft: 15496 corp: 35/2846b lim: 120 exec/s: 27 rss: 74Mb 00:07:14.345 ###### Recommended dictionary. ###### 00:07:14.345 "\000\000\012=\213\355\245F" # Uses: 0 00:07:14.345 ###### End of recommended dictionary. ###### 00:07:14.345 Done 54 runs in 2 second(s) 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:14.345 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 
-P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:14.606 [2024-12-09 13:14:16.597230] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:14.606 [2024-12-09 13:14:16.597298] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302469 ] 00:07:14.606 [2024-12-09 13:14:16.803663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.606 [2024-12-09 13:14:16.838647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.866 [2024-12-09 13:14:16.897744] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.866 [2024-12-09 13:14:16.914028] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:14.866 INFO: Running with entropic power schedule (0xFF, 100). 00:07:14.866 INFO: Seed: 2145998458 00:07:14.866 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:07:14.866 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:07:14.866 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:14.866 INFO: A corpus is not provided, starting from an empty corpus 00:07:14.866 #2 INITED exec/s: 0 rss: 65Mb 00:07:14.866 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:14.866 This may also happen if the target rejected all inputs we tried so far 00:07:14.866 [2024-12-09 13:14:16.979302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.866 [2024-12-09 13:14:16.979331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.126 NEW_FUNC[1/716]: 0x459398 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:15.126 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:15.126 #7 NEW cov: 12188 ft: 12189 corp: 2/40b lim: 100 exec/s: 0 rss: 72Mb L: 39/39 MS: 5 InsertRepeatedBytes-ChangeBit-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:07:15.126 [2024-12-09 13:14:17.310438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.126 [2024-12-09 13:14:17.310505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.126 #8 NEW cov: 12318 ft: 12994 corp: 3/79b lim: 100 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 ShuffleBytes- 00:07:15.386 [2024-12-09 13:14:17.380550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.386 [2024-12-09 13:14:17.380577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.386 [2024-12-09 13:14:17.380632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 
00:07:15.386 [2024-12-09 13:14:17.380648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.386 [2024-12-09 13:14:17.380708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.386 [2024-12-09 13:14:17.380724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.386 #13 NEW cov: 12324 ft: 13545 corp: 4/148b lim: 100 exec/s: 0 rss: 72Mb L: 69/69 MS: 5 ChangeBit-ShuffleBytes-CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:07:15.386 [2024-12-09 13:14:17.420421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.386 [2024-12-09 13:14:17.420448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.386 #14 NEW cov: 12409 ft: 13844 corp: 5/187b lim: 100 exec/s: 0 rss: 72Mb L: 39/69 MS: 1 ShuffleBytes- 00:07:15.386 [2024-12-09 13:14:17.460512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.386 [2024-12-09 13:14:17.460538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.386 #15 NEW cov: 12409 ft: 14094 corp: 6/226b lim: 100 exec/s: 0 rss: 72Mb L: 39/69 MS: 1 ChangeByte- 00:07:15.386 [2024-12-09 13:14:17.500638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.386 [2024-12-09 13:14:17.500664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.387 #16 NEW cov: 12409 ft: 14267 corp: 7/265b lim: 100 exec/s: 0 rss: 72Mb L: 39/69 MS: 1 ChangeBit- 00:07:15.387 [2024-12-09 13:14:17.561050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.387 [2024-12-09 13:14:17.561077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.387 [2024-12-09 13:14:17.561128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.387 [2024-12-09 13:14:17.561143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.387 [2024-12-09 13:14:17.561199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.387 [2024-12-09 13:14:17.561213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.387 #17 NEW cov: 12409 ft: 14324 corp: 8/335b lim: 100 exec/s: 0 rss: 72Mb L: 70/70 MS: 1 InsertByte- 00:07:15.387 [2024-12-09 13:14:17.621347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.387 [2024-12-09 13:14:17.621372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.387 [2024-12-09 13:14:17.621428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.387 [2024-12-09 13:14:17.621443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.387 [2024-12-09 13:14:17.621499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.387 [2024-12-09 13:14:17.621513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.387 [2024-12-09 13:14:17.621570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:15.387 [2024-12-09 13:14:17.621590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.647 #18 NEW cov: 12409 ft: 14601 corp: 9/419b lim: 100 exec/s: 0 rss: 72Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:07:15.647 [2024-12-09 13:14:17.661443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.647 [2024-12-09 13:14:17.661469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.647 [2024-12-09 13:14:17.661515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.647 [2024-12-09 13:14:17.661531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.647 [2024-12-09 13:14:17.661590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.647 [2024-12-09 13:14:17.661605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.647 [2024-12-09 13:14:17.661661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:15.647 [2024-12-09 13:14:17.661675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.647 #19 NEW cov: 12409 ft: 14629 corp: 10/503b lim: 100 exec/s: 0 rss: 72Mb L: 84/84 MS: 1 CrossOver- 00:07:15.647 [2024-12-09 13:14:17.721526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.647 [2024-12-09 13:14:17.721552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.647 [2024-12-09 13:14:17.721594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.647 [2024-12-09 13:14:17.721609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.647 [2024-12-09 13:14:17.721681] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.647 [2024-12-09 13:14:17.721696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.647 #20 NEW cov: 12409 ft: 14708 corp: 11/573b lim: 100 exec/s: 0 rss: 72Mb L: 70/84 MS: 1 ChangeBit- 00:07:15.647 [2024-12-09 13:14:17.781551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.647 [2024-12-09 13:14:17.781577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:15.647 [2024-12-09 13:14:17.781631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.647 [2024-12-09 13:14:17.781646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.647 #21 NEW cov: 12409 ft: 15005 corp: 12/616b lim: 100 exec/s: 0 rss: 73Mb L: 43/84 MS: 1 CrossOver- 00:07:15.647 [2024-12-09 13:14:17.841963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.647 [2024-12-09 13:14:17.841990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.647 [2024-12-09 13:14:17.842038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.647 [2024-12-09 13:14:17.842053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.647 [2024-12-09 13:14:17.842125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.647 [2024-12-09 13:14:17.842141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.647 [2024-12-09 13:14:17.842199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:15.647 [2024-12-09 13:14:17.842214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.647 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:15.648 #22 NEW cov: 12432 ft: 15045 corp: 13/702b lim: 100 exec/s: 0 rss: 73Mb L: 86/86 MS: 1 InsertRepeatedBytes- 00:07:15.648 [2024-12-09 13:14:17.881715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.648 [2024-12-09 13:14:17.881745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.908 #23 NEW cov: 12432 ft: 15048 corp: 14/741b lim: 100 exec/s: 0 rss: 73Mb L: 39/86 MS: 1 ChangeBit- 00:07:15.908 [2024-12-09 13:14:17.921792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.908 [2024-12-09 13:14:17.921819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.908 #24 NEW cov: 12432 ft: 15093 corp: 15/780b lim: 100 exec/s: 0 rss: 73Mb L: 39/86 MS: 1 CopyPart- 00:07:15.908 [2024-12-09 13:14:17.962281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.908 [2024-12-09 13:14:17.962306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.908 [2024-12-09 13:14:17.962355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.908 [2024-12-09 13:14:17.962371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.908 [2024-12-09 13:14:17.962427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 
00:07:15.908 [2024-12-09 13:14:17.962442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.908 [2024-12-09 13:14:17.962498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:15.908 [2024-12-09 13:14:17.962514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.908 #25 NEW cov: 12432 ft: 15132 corp: 16/864b lim: 100 exec/s: 25 rss: 73Mb L: 84/86 MS: 1 ChangeBinInt- 00:07:15.908 [2024-12-09 13:14:18.022478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.908 [2024-12-09 13:14:18.022504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.908 [2024-12-09 13:14:18.022554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.908 [2024-12-09 13:14:18.022569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.908 [2024-12-09 13:14:18.022630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.908 [2024-12-09 13:14:18.022645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.908 [2024-12-09 13:14:18.022701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:15.908 [2024-12-09 13:14:18.022716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.908 #26 NEW cov: 12432 ft: 15157 corp: 17/948b lim: 100 exec/s: 26 rss: 73Mb L: 84/86 MS: 1 ShuffleBytes- 00:07:15.908 [2024-12-09 13:14:18.062554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.908 [2024-12-09 13:14:18.062579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.908 [2024-12-09 13:14:18.062644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.908 [2024-12-09 13:14:18.062658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.908 [2024-12-09 13:14:18.062715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.908 [2024-12-09 13:14:18.062731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.908 [2024-12-09 13:14:18.062791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:15.908 [2024-12-09 13:14:18.062806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.908 #27 NEW cov: 12432 ft: 15198 corp: 18/1047b lim: 100 exec/s: 27 rss: 73Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:07:15.908 [2024-12-09 13:14:18.102552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.908 [2024-12-09 13:14:18.102578] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.908 [2024-12-09 13:14:18.102622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.908 [2024-12-09 13:14:18.102637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.909 [2024-12-09 13:14:18.102693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.909 [2024-12-09 13:14:18.102724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.909 #28 NEW cov: 12432 ft: 15237 corp: 19/1116b lim: 100 exec/s: 28 rss: 73Mb L: 69/99 MS: 1 ChangeBit- 00:07:15.909 [2024-12-09 13:14:18.142458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.909 [2024-12-09 13:14:18.142485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.169 #29 NEW cov: 12432 ft: 15276 corp: 20/1155b lim: 100 exec/s: 29 rss: 73Mb L: 39/99 MS: 1 CMP- DE: "\007\335.\265>\012\000\000"- 00:07:16.169 [2024-12-09 13:14:18.182804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.169 [2024-12-09 13:14:18.182830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.169 [2024-12-09 13:14:18.182879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.169 [2024-12-09 13:14:18.182894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.169 [2024-12-09 13:14:18.182950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.169 [2024-12-09 13:14:18.182964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.169 #30 NEW cov: 12432 ft: 15331 corp: 21/1218b lim: 100 exec/s: 30 rss: 73Mb L: 63/99 MS: 1 InsertRepeatedBytes- 00:07:16.169 [2024-12-09 13:14:18.222893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.169 [2024-12-09 13:14:18.222918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.169 [2024-12-09 13:14:18.222956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.169 [2024-12-09 13:14:18.222970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.169 [2024-12-09 13:14:18.223027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.169 [2024-12-09 13:14:18.223043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.169 #31 NEW cov: 12432 ft: 15397 corp: 22/1293b lim: 100 exec/s: 31 rss: 73Mb L: 75/99 MS: 1 CopyPart- 00:07:16.169 [2024-12-09 13:14:18.283219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) 
sqid:1 cid:0 nsid:0 00:07:16.169 [2024-12-09 13:14:18.283245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.169 [2024-12-09 13:14:18.283295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.169 [2024-12-09 13:14:18.283312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.169 [2024-12-09 13:14:18.283368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.169 [2024-12-09 13:14:18.283383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.169 [2024-12-09 13:14:18.283443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.169 [2024-12-09 13:14:18.283457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.169 #32 NEW cov: 12432 ft: 15424 corp: 23/1377b lim: 100 exec/s: 32 rss: 73Mb L: 84/99 MS: 1 ChangeBinInt- 00:07:16.169 [2024-12-09 13:14:18.322962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.169 [2024-12-09 13:14:18.322988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.169 #33 NEW cov: 12432 ft: 15436 corp: 24/1401b lim: 100 exec/s: 33 rss: 73Mb L: 24/99 MS: 1 EraseBytes- 00:07:16.169 [2024-12-09 13:14:18.363399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.169 [2024-12-09 13:14:18.363425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.169 [2024-12-09 13:14:18.363482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.169 [2024-12-09 13:14:18.363494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.169 [2024-12-09 13:14:18.363563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.169 [2024-12-09 13:14:18.363578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.169 [2024-12-09 13:14:18.363644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.169 [2024-12-09 13:14:18.363659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.169 #34 NEW cov: 12432 ft: 15445 corp: 25/1485b lim: 100 exec/s: 34 rss: 73Mb L: 84/99 MS: 1 ChangeByte- 00:07:16.169 [2024-12-09 13:14:18.403525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.169 [2024-12-09 13:14:18.403551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.169 [2024-12-09 13:14:18.403612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.169 [2024-12-09 13:14:18.403626] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.169 [2024-12-09 13:14:18.403682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.169 [2024-12-09 13:14:18.403698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.169 [2024-12-09 13:14:18.403754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.169 [2024-12-09 13:14:18.403769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.430 #35 NEW cov: 12432 ft: 15466 corp: 26/1567b lim: 100 exec/s: 35 rss: 73Mb L: 82/99 MS: 1 EraseBytes- 00:07:16.430 [2024-12-09 13:14:18.463600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.430 [2024-12-09 13:14:18.463626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.430 [2024-12-09 13:14:18.463666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.430 [2024-12-09 13:14:18.463680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.430 [2024-12-09 13:14:18.463737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.430 [2024-12-09 13:14:18.463751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.430 #36 NEW cov: 12432 ft: 15523 corp: 27/1643b lim: 100 exec/s: 36 rss: 73Mb L: 76/99 MS: 1 InsertByte- 00:07:16.430 [2024-12-09 13:14:18.523868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.430 [2024-12-09 13:14:18.523893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.430 [2024-12-09 13:14:18.523945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.430 [2024-12-09 13:14:18.523961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.430 [2024-12-09 13:14:18.524016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.430 [2024-12-09 13:14:18.524029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.430 [2024-12-09 13:14:18.524088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.430 [2024-12-09 13:14:18.524103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.430 #37 NEW cov: 12432 ft: 15552 corp: 28/1742b lim: 100 exec/s: 37 rss: 73Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:07:16.430 [2024-12-09 13:14:18.563724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.430 [2024-12-09 13:14:18.563750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.430 [2024-12-09 13:14:18.563789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.430 [2024-12-09 13:14:18.563803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.430 #38 NEW cov: 12432 ft: 15558 corp: 29/1799b lim: 100 exec/s: 38 rss: 73Mb L: 57/99 MS: 1 InsertRepeatedBytes- 00:07:16.430 [2024-12-09 13:14:18.603971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.430 [2024-12-09 13:14:18.603997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.430 [2024-12-09 13:14:18.604035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.430 [2024-12-09 13:14:18.604050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.430 [2024-12-09 13:14:18.604106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.430 [2024-12-09 13:14:18.604120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.430 #39 NEW cov: 12432 ft: 15581 corp: 30/1859b lim: 100 exec/s: 39 rss: 73Mb L: 60/99 MS: 1 InsertRepeatedBytes- 00:07:16.430 [2024-12-09 13:14:18.644048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.430 [2024-12-09 13:14:18.644074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.431 [2024-12-09 13:14:18.644125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.431 [2024-12-09 13:14:18.644141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.431 [2024-12-09 13:14:18.644217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.431 [2024-12-09 13:14:18.644232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.431 #40 NEW cov: 12432 ft: 15608 corp: 31/1936b lim: 100 exec/s: 40 rss: 73Mb L: 77/99 MS: 1 PersAutoDict- DE: "\007\335.\265>\012\000\000"- 00:07:16.691 [2024-12-09 13:14:18.684321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.691 [2024-12-09 13:14:18.684347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.684401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.691 [2024-12-09 13:14:18.684415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.684470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.691 [2024-12-09 13:14:18.684485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.684540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.691 [2024-12-09 13:14:18.684554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.691 #41 NEW cov: 12432 ft: 15611 corp: 32/2020b lim: 100 exec/s: 41 rss: 73Mb L: 84/99 MS: 1 ChangeBinInt- 00:07:16.691 [2024-12-09 13:14:18.744547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.691 [2024-12-09 13:14:18.744572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.744632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.691 [2024-12-09 13:14:18.744648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.744704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.691 [2024-12-09 13:14:18.744718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.744776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.691 [2024-12-09 13:14:18.744790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.691 #42 NEW cov: 12432 ft: 15626 corp: 33/2114b lim: 100 exec/s: 42 rss: 73Mb L: 94/99 MS: 1 CopyPart- 00:07:16.691 [2024-12-09 13:14:18.784611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.691 [2024-12-09 13:14:18.784638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.784692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.691 [2024-12-09 13:14:18.784706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.784761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.691 [2024-12-09 13:14:18.784776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.784829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.691 [2024-12-09 13:14:18.784844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.691 #43 NEW cov: 12432 ft: 15649 corp: 34/2207b lim: 100 exec/s: 43 rss: 73Mb L: 93/99 MS: 1 CopyPart- 00:07:16.691 [2024-12-09 13:14:18.844861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.691 [2024-12-09 13:14:18.844888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.691 
[2024-12-09 13:14:18.844947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.691 [2024-12-09 13:14:18.844961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.845017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.691 [2024-12-09 13:14:18.845032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.845087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.691 [2024-12-09 13:14:18.845102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.845156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:16.691 [2024-12-09 13:14:18.845170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:16.691 #44 NEW cov: 12432 ft: 15686 corp: 35/2307b lim: 100 exec/s: 44 rss: 73Mb L: 100/100 MS: 1 InsertByte- 00:07:16.691 [2024-12-09 13:14:18.905043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.691 [2024-12-09 13:14:18.905068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.905127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.691 [2024-12-09 13:14:18.905141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.905214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.691 [2024-12-09 13:14:18.905230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.905288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.691 [2024-12-09 13:14:18.905303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.691 [2024-12-09 13:14:18.905360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:16.691 [2024-12-09 13:14:18.905376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:16.952 #45 NEW cov: 12432 ft: 15696 corp: 36/2407b lim: 100 exec/s: 45 rss: 74Mb L: 100/100 MS: 1 CrossOver- 00:07:16.952 [2024-12-09 13:14:18.965114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.952 [2024-12-09 13:14:18.965138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.952 [2024-12-09 13:14:18.965208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.952 [2024-12-09 13:14:18.965224] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.952 [2024-12-09 13:14:18.965280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.952 [2024-12-09 13:14:18.965293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.952 [2024-12-09 13:14:18.965357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.952 [2024-12-09 13:14:18.965373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.952 #46 NEW cov: 12432 ft: 15769 corp: 37/2491b lim: 100 exec/s: 23 rss: 74Mb L: 84/100 MS: 1 ChangeBit- 00:07:16.952 #46 DONE cov: 12432 ft: 15769 corp: 37/2491b lim: 100 exec/s: 23 rss: 74Mb 00:07:16.952 ###### Recommended dictionary. ###### 00:07:16.952 "\007\335.\265>\012\000\000" # Uses: 1 00:07:16.952 ###### End of recommended dictionary. ###### 00:07:16.952 Done 46 runs in 2 second(s) 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:16.952 13:14:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:16.952 [2024-12-09 13:14:19.157093] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:16.952 [2024-12-09 13:14:19.157161] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302885 ] 00:07:17.213 [2024-12-09 13:14:19.353320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.213 [2024-12-09 13:14:19.386038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.213 [2024-12-09 13:14:19.444915] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.473 [2024-12-09 13:14:19.461250] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:17.473 INFO: Running with entropic power schedule (0xFF, 100). 00:07:17.473 INFO: Seed: 398042989 00:07:17.473 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:07:17.473 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:07:17.473 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:17.473 INFO: A corpus is not provided, starting from an empty corpus 00:07:17.473 #2 INITED exec/s: 0 rss: 66Mb 00:07:17.473 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:17.473 This may also happen if the target rejected all inputs we tried so far 00:07:17.473 [2024-12-09 13:14:19.526488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4370 00:07:17.473 [2024-12-09 13:14:19.526519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.473 [2024-12-09 13:14:19.526572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:17.473 [2024-12-09 13:14:19.526592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.734 NEW_FUNC[1/716]: 0x45c358 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:17.734 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:17.734 #4 NEW cov: 12162 ft: 12163 corp: 2/30b lim: 50 exec/s: 0 rss: 72Mb L: 29/29 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:17.734 [2024-12-09 13:14:19.857484] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4370 00:07:17.734 [2024-12-09 13:14:19.857540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.734 [2024-12-09 13:14:19.857626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:17.734 [2024-12-09 13:14:19.857656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.734 #5 NEW cov: 12296 ft: 12819 corp: 3/59b lim: 50 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 ShuffleBytes- 00:07:17.734 [2024-12-09 13:14:19.917416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4370 00:07:17.734 [2024-12-09 13:14:19.917444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.734 [2024-12-09 13:14:19.917483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:17.734 [2024-12-09 13:14:19.917499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.734 #6 NEW cov: 12302 ft: 13159 corp: 4/88b lim: 50 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 ShuffleBytes- 00:07:17.734 [2024-12-09 13:14:19.977552] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2814792884554240 len:1 00:07:17.734 [2024-12-09 13:14:19.977580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.994 #9 NEW cov: 12387 ft: 13756 corp: 5/98b lim: 50 exec/s: 0 rss: 72Mb L: 10/29 MS: 3 CMP-CopyPart-CopyPart- DE: "\000\002\000\000"- 00:07:17.994 [2024-12-09 13:14:20.017695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4370 00:07:17.994 [2024-12-09 13:14:20.017722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.994 [2024-12-09 13:14:20.017773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:17.994 [2024-12-09 13:14:20.017787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.994 #10 NEW cov: 12387 ft: 13859 corp: 6/127b lim: 50 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 CopyPart- 00:07:17.994 [2024-12-09 13:14:20.077873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4370 00:07:17.994 [2024-12-09 13:14:20.077903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.994 [2024-12-09 13:14:20.077971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:17.994 [2024-12-09 13:14:20.077988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.994 #11 NEW cov: 12387 ft: 13944 corp: 7/156b lim: 50 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 CopyPart- 00:07:17.994 [2024-12-09 13:14:20.138140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229783340850155793 len:4370 00:07:17.994 [2024-12-09 13:14:20.138167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.994 [2024-12-09 13:14:20.138200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE 
sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:17.994 [2024-12-09 13:14:20.138214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.994 [2024-12-09 13:14:20.138265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 00:07:17.994 [2024-12-09 13:14:20.138296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.994 #12 NEW cov: 12387 ft: 14272 corp: 8/186b lim: 50 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 InsertByte- 00:07:17.994 [2024-12-09 13:14:20.178042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2814792884554240 len:1 00:07:17.995 [2024-12-09 13:14:20.178070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.995 #13 NEW cov: 12387 ft: 14359 corp: 9/196b lim: 50 exec/s: 0 rss: 73Mb L: 10/30 MS: 1 CopyPart- 00:07:17.995 [2024-12-09 13:14:20.238243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782938247303441 len:4608 00:07:17.995 [2024-12-09 13:14:20.238271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.255 #15 NEW cov: 12387 ft: 14450 corp: 10/208b lim: 50 exec/s: 0 rss: 73Mb L: 12/30 MS: 2 CMP-CrossOver- DE: "\377\377"- 00:07:18.255 [2024-12-09 13:14:20.278514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229783340850155793 len:4370 00:07:18.255 [2024-12-09 13:14:20.278541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.255 [2024-12-09 13:14:20.278591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:18.255 [2024-12-09 13:14:20.278608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.255 [2024-12-09 13:14:20.278660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 00:07:18.255 [2024-12-09 13:14:20.278674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.255 #16 NEW cov: 12387 ft: 14482 corp: 11/238b lim: 50 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 CrossOver- 00:07:18.255 [2024-12-09 13:14:20.338516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10995284075786 len:2561 00:07:18.255 [2024-12-09 13:14:20.338543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.255 #22 NEW cov: 12387 ft: 14522 corp: 12/249b lim: 50 exec/s: 0 rss: 73Mb L: 11/30 MS: 1 InsertByte- 00:07:18.255 [2024-12-09 13:14:20.398808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4370 00:07:18.255 [2024-12-09 13:14:20.398835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.255 [2024-12-09 13:14:20.398888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:18.255 [2024-12-09 13:14:20.398904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.255 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:18.255 #23 NEW cov: 12410 ft: 14572 corp: 13/278b lim: 50 exec/s: 0 rss: 73Mb L: 29/30 MS: 1 CopyPart- 00:07:18.255 [2024-12-09 13:14:20.438781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2814792884554240 len:1 00:07:18.255 [2024-12-09 13:14:20.438807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.255 #24 NEW cov: 12410 ft: 14604 corp: 14/288b lim: 50 exec/s: 0 rss: 73Mb L: 10/30 MS: 1 ChangeBinInt- 00:07:18.255 [2024-12-09 13:14:20.479092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4438 00:07:18.255 [2024-12-09 13:14:20.479118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.255 [2024-12-09 13:14:20.479161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:18.255 [2024-12-09 13:14:20.479177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.255 [2024-12-09 13:14:20.479227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 00:07:18.255 [2024-12-09 13:14:20.479243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.516 #25 NEW cov: 12410 ft: 14630 corp: 15/318b lim: 50 exec/s: 25 rss: 73Mb L: 30/30 MS: 1 InsertByte- 00:07:18.516 [2024-12-09 13:14:20.539117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11038233733386 len:1 00:07:18.516 [2024-12-09 13:14:20.539144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.516 #27 NEW cov: 12410 ft: 14658 corp: 16/328b lim: 50 exec/s: 27 rss: 73Mb L: 10/30 MS: 2 EraseBytes-InsertByte- 00:07:18.516 [2024-12-09 13:14:20.599363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4370 00:07:18.516 [2024-12-09 13:14:20.599390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.516 [2024-12-09 13:14:20.599425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:18.516 [2024-12-09 13:14:20.599441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.516 #28 NEW cov: 12410 ft: 14749 corp: 17/357b lim: 50 exec/s: 28 rss: 73Mb L: 29/30 MS: 1 ShuffleBytes- 00:07:18.516 [2024-12-09 13:14:20.639331] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10997431559434 len:2561 00:07:18.516 [2024-12-09 13:14:20.639356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.516 #29 NEW cov: 12410 ft: 14784 corp: 18/368b lim: 50 exec/s: 29 rss: 73Mb L: 11/30 MS: 1 ChangeBit- 00:07:18.516 [2024-12-09 13:14:20.699747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4370 00:07:18.516 [2024-12-09 13:14:20.699778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.516 [2024-12-09 13:14:20.699812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938246316049 len:4370 00:07:18.516 [2024-12-09 13:14:20.699828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.516 [2024-12-09 13:14:20.699880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 00:07:18.516 [2024-12-09 13:14:20.699897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.516 #30 NEW cov: 12410 ft: 14791 corp: 19/399b lim: 50 exec/s: 30 rss: 73Mb L: 31/31 MS: 1 CMP- DE: "\002\000"- 00:07:18.516 [2024-12-09 13:14:20.739628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2814792884551690 len:1 00:07:18.516 [2024-12-09 13:14:20.739655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.776 #31 NEW cov: 12410 ft: 14816 corp: 20/409b lim: 50 exec/s: 31 rss: 73Mb L: 10/31 MS: 1 ShuffleBytes- 00:07:18.777 [2024-12-09 13:14:20.779826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4370 00:07:18.777 [2024-12-09 13:14:20.779854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.777 [2024-12-09 13:14:20.779905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:18.777 [2024-12-09 13:14:20.779922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.777 #32 NEW cov: 12410 ft: 14844 corp: 21/437b lim: 50 exec/s: 32 rss: 73Mb L: 28/31 MS: 1 EraseBytes- 00:07:18.777 [2024-12-09 13:14:20.819923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4370 00:07:18.777 [2024-12-09 13:14:20.819950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.777 [2024-12-09 13:14:20.819998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:18.777 [2024-12-09 13:14:20.820014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:18.777 #33 NEW cov: 12410 ft: 14885 corp: 22/466b lim: 50 exec/s: 33 rss: 73Mb L: 29/31 MS: 1 ChangeBinInt- 00:07:18.777 [2024-12-09 13:14:20.859954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10995284052480 len:2561 00:07:18.777 [2024-12-09 13:14:20.859982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.777 #34 NEW cov: 12410 ft: 14901 corp: 23/476b lim: 50 exec/s: 34 rss: 73Mb L: 10/31 MS: 1 ShuffleBytes- 00:07:18.777 [2024-12-09 13:14:20.900132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782911353426193 len:11 00:07:18.777 [2024-12-09 13:14:20.900159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.777 [2024-12-09 13:14:20.900203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782937968185617 len:4370 00:07:18.777 [2024-12-09 13:14:20.900218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.777 #35 NEW cov: 12410 ft: 14906 corp: 24/499b lim: 50 exec/s: 35 rss: 74Mb L: 23/31 MS: 1 CrossOver- 00:07:18.777 [2024-12-09 13:14:20.960538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229783340850155793 len:4370 00:07:18.777 [2024-12-09 13:14:20.960565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.777 [2024-12-09 13:14:20.960609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:18.777 [2024-12-09 13:14:20.960624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.777 [2024-12-09 13:14:20.960675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 00:07:18.777 [2024-12-09 13:14:20.960706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.777 [2024-12-09 13:14:20.960757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:7931139183774601489 len:4370 00:07:18.777 [2024-12-09 13:14:20.960774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.777 #36 NEW cov: 12410 ft: 15202 corp: 25/547b lim: 50 exec/s: 36 rss: 74Mb L: 48/48 MS: 1 CopyPart- 00:07:18.777 [2024-12-09 13:14:21.000531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4370 00:07:18.777 [2024-12-09 13:14:21.000558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.777 [2024-12-09 13:14:21.000604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:18.777 [2024-12-09 13:14:21.000620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.777 [2024-12-09 13:14:21.000670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247323153 len:4370 00:07:18.777 [2024-12-09 13:14:21.000687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.037 #37 NEW cov: 12410 ft: 15219 corp: 26/577b lim: 50 exec/s: 37 rss: 74Mb L: 30/48 MS: 1 InsertByte- 00:07:19.037 [2024-12-09 13:14:21.040681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4370 00:07:19.037 [2024-12-09 13:14:21.040709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.037 [2024-12-09 13:14:21.040743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:19.037 [2024-12-09 13:14:21.040758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.037 [2024-12-09 13:14:21.040808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:46775 00:07:19.037 [2024-12-09 13:14:21.040823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.037 #38 NEW cov: 12410 ft: 15231 corp: 27/611b lim: 50 exec/s: 38 rss: 74Mb L: 34/48 MS: 1 InsertRepeatedBytes- 00:07:19.037 [2024-12-09 13:14:21.080668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229781841906569489 len:4370 00:07:19.037 [2024-12-09 13:14:21.080695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.037 [2024-12-09 13:14:21.080730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:19.037 [2024-12-09 13:14:21.080747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.037 #39 NEW cov: 12410 ft: 15241 corp: 28/640b lim: 50 exec/s: 39 rss: 74Mb L: 29/48 MS: 1 ChangeBinInt- 00:07:19.037 [2024-12-09 13:14:21.140931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782937994531089 len:4370 00:07:19.038 [2024-12-09 13:14:21.140958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.038 [2024-12-09 13:14:21.140999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938246316049 len:4370 00:07:19.038 [2024-12-09 13:14:21.141015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.038 [2024-12-09 13:14:21.141066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 00:07:19.038 [2024-12-09 13:14:21.141082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.038 #40 NEW cov: 12410 
ft: 15249 corp: 29/671b lim: 50 exec/s: 40 rss: 74Mb L: 31/48 MS: 1 PersAutoDict- DE: "\002\000"- 00:07:19.038 [2024-12-09 13:14:21.200998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4370 00:07:19.038 [2024-12-09 13:14:21.201025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.038 [2024-12-09 13:14:21.201061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:19.038 [2024-12-09 13:14:21.201077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.038 #41 NEW cov: 12410 ft: 15253 corp: 30/695b lim: 50 exec/s: 41 rss: 74Mb L: 24/48 MS: 1 EraseBytes- 00:07:19.038 [2024-12-09 13:14:21.261054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10995284075786 len:2561 00:07:19.038 [2024-12-09 13:14:21.261082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.298 #42 NEW cov: 12410 ft: 15263 corp: 31/706b lim: 50 exec/s: 42 rss: 74Mb L: 11/48 MS: 1 ShuffleBytes- 00:07:19.298 [2024-12-09 13:14:21.301440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229885195999580433 len:4370 00:07:19.298 [2024-12-09 13:14:21.301467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.298 [2024-12-09 13:14:21.301512] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:07:19.298 [2024-12-09 13:14:21.301528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.298 [2024-12-09 13:14:21.301579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 00:07:19.298 [2024-12-09 13:14:21.301600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.298 [2024-12-09 13:14:21.301651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:7931139183774601489 len:4370 00:07:19.298 [2024-12-09 13:14:21.301666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.298 #43 NEW cov: 12410 ft: 15279 corp: 32/754b lim: 50 exec/s: 43 rss: 74Mb L: 48/48 MS: 1 ShuffleBytes- 00:07:19.298 [2024-12-09 13:14:21.361571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782941418197265 len:4370 00:07:19.298 [2024-12-09 13:14:21.361603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.298 [2024-12-09 13:14:21.361659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1238790137502044433 len:4370 00:07:19.298 [2024-12-09 13:14:21.361674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:19.298 [2024-12-09 13:14:21.361726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247323153 len:4370 00:07:19.298 [2024-12-09 13:14:21.361748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.298 #44 NEW cov: 12410 ft: 15291 corp: 33/784b lim: 50 exec/s: 44 rss: 74Mb L: 30/48 MS: 1 ChangeBit- 00:07:19.298 [2024-12-09 13:14:21.421519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782908182532369 len:1 00:07:19.298 [2024-12-09 13:14:21.421545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.298 #50 NEW cov: 12410 ft: 15297 corp: 34/794b lim: 50 exec/s: 50 rss: 74Mb L: 10/48 MS: 1 CrossOver- 00:07:19.298 [2024-12-09 13:14:21.481980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229885195999580433 len:4370 00:07:19.298 [2024-12-09 13:14:21.482006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.298 [2024-12-09 13:14:21.482053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247327249 len:4370 00:07:19.298 [2024-12-09 13:14:21.482069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.298 [2024-12-09 13:14:21.482119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 00:07:19.298 [2024-12-09 13:14:21.482134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.298 [2024-12-09 13:14:21.482184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:7931139183774601489 len:4370 00:07:19.298 [2024-12-09 13:14:21.482200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.298 #51 NEW cov: 12410 ft: 15324 corp: 35/842b lim: 50 exec/s: 25 rss: 74Mb L: 48/48 MS: 1 CopyPart- 00:07:19.298 #51 DONE cov: 12410 ft: 15324 corp: 35/842b lim: 50 exec/s: 25 rss: 74Mb 00:07:19.298 ###### Recommended dictionary. ###### 00:07:19.298 "\000\002\000\000" # Uses: 0 00:07:19.298 "\377\377" # Uses: 0 00:07:19.298 "\002\000" # Uses: 1 00:07:19.298 ###### End of recommended dictionary. 
###### 00:07:19.298 Done 51 runs in 2 second(s) 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:19.558 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:19.558 [2024-12-09 13:14:21.674848] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:07:19.559 [2024-12-09 13:14:21.674938] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid303415 ] 00:07:19.819 [2024-12-09 13:14:21.873663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.819 [2024-12-09 13:14:21.906971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.819 [2024-12-09 13:14:21.966236] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.819 [2024-12-09 13:14:21.982554] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:19.819 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.819 INFO: Seed: 2918024952 00:07:19.819 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:07:19.819 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:07:19.819 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:19.819 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.819 #2 INITED exec/s: 0 rss: 65Mb 00:07:19.819 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:19.819 This may also happen if the target rejected all inputs we tried so far 00:07:19.819 [2024-12-09 13:14:22.038388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.819 [2024-12-09 13:14:22.038425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.819 [2024-12-09 13:14:22.038493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.819 [2024-12-09 13:14:22.038513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.819 [2024-12-09 13:14:22.038573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.819 [2024-12-09 13:14:22.038599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.819 [2024-12-09 13:14:22.038662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:19.819 [2024-12-09 13:14:22.038680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.819 [2024-12-09 13:14:22.038746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:19.819 [2024-12-09 13:14:22.038764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:20.339 NEW_FUNC[1/718]: 0x45df18 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:20.339 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:20.339 #3 NEW cov: 12242 ft: 12241 corp: 2/91b lim: 90 exec/s: 0 rss: 71Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:07:20.339 [2024-12-09 
13:14:22.369636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.339 [2024-12-09 13:14:22.369706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.339 [2024-12-09 13:14:22.369808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.339 [2024-12-09 13:14:22.369844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.339 [2024-12-09 13:14:22.369937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:20.339 [2024-12-09 13:14:22.369969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.339 [2024-12-09 13:14:22.370064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:20.339 [2024-12-09 13:14:22.370099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.339 [2024-12-09 13:14:22.370196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:20.339 [2024-12-09 13:14:22.370230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:20.339 #4 NEW cov: 12355 ft: 12995 corp: 3/181b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 ChangeByte- 00:07:20.340 [2024-12-09 13:14:22.439399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.340 [2024-12-09 13:14:22.439427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.340 [2024-12-09 13:14:22.439479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.340 [2024-12-09 13:14:22.439496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.340 [2024-12-09 13:14:22.439556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:20.340 [2024-12-09 13:14:22.439572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.340 [2024-12-09 13:14:22.439636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:20.340 [2024-12-09 13:14:22.439653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.340 [2024-12-09 13:14:22.439710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:20.340 [2024-12-09 13:14:22.439725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:20.340 #5 NEW cov: 12361 ft: 13303 corp: 4/271b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 ChangeByte- 00:07:20.340 [2024-12-09 13:14:22.499028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 
00:07:20.340 [2024-12-09 13:14:22.499055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.340 [2024-12-09 13:14:22.499095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.340 [2024-12-09 13:14:22.499111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.340 #10 NEW cov: 12446 ft: 14005 corp: 5/312b lim: 90 exec/s: 0 rss: 72Mb L: 41/90 MS: 5 InsertByte-ChangeBinInt-EraseBytes-ChangeBit-InsertRepeatedBytes- 00:07:20.340 [2024-12-09 13:14:22.539622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.340 [2024-12-09 13:14:22.539652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.340 [2024-12-09 13:14:22.539700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.340 [2024-12-09 13:14:22.539718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.340 [2024-12-09 13:14:22.539779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:20.340 [2024-12-09 13:14:22.539796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.340 [2024-12-09 13:14:22.539855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:20.340 [2024-12-09 13:14:22.539871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.340 [2024-12-09 13:14:22.539928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:20.340 [2024-12-09 13:14:22.539944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:20.340 #11 NEW cov: 12446 ft: 14085 corp: 6/402b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 ChangeBit- 00:07:20.600 [2024-12-09 13:14:22.599180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.600 [2024-12-09 13:14:22.599208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.600 #12 NEW cov: 12446 ft: 14916 corp: 7/425b lim: 90 exec/s: 0 rss: 72Mb L: 23/90 MS: 1 InsertRepeatedBytes- 00:07:20.600 [2024-12-09 13:14:22.639473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.600 [2024-12-09 13:14:22.639501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.600 [2024-12-09 13:14:22.639556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.600 [2024-12-09 13:14:22.639583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.600 #13 NEW cov: 12446 ft: 14998 corp: 8/466b lim: 90 exec/s: 0 rss: 72Mb L: 41/90 MS: 
1 CopyPart- 00:07:20.600 [2024-12-09 13:14:22.699646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.600 [2024-12-09 13:14:22.699673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.600 [2024-12-09 13:14:22.699714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.600 [2024-12-09 13:14:22.699730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.600 #14 NEW cov: 12446 ft: 15099 corp: 9/514b lim: 90 exec/s: 0 rss: 72Mb L: 48/90 MS: 1 EraseBytes- 00:07:20.600 [2024-12-09 13:14:22.739584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.600 [2024-12-09 13:14:22.739616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.600 #15 NEW cov: 12446 ft: 15175 corp: 10/537b lim: 90 exec/s: 0 rss: 72Mb L: 23/90 MS: 1 ChangeBit- 00:07:20.600 [2024-12-09 13:14:22.800391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.600 [2024-12-09 13:14:22.800419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.600 [2024-12-09 13:14:22.800468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.600 [2024-12-09 13:14:22.800486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.600 [2024-12-09 13:14:22.800545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:20.600 [2024-12-09 13:14:22.800561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.600 [2024-12-09 13:14:22.800623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:20.600 [2024-12-09 13:14:22.800638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.600 [2024-12-09 13:14:22.800700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:20.600 [2024-12-09 13:14:22.800716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:20.600 #16 NEW cov: 12446 ft: 15232 corp: 11/627b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 ChangeBinInt- 00:07:20.861 [2024-12-09 13:14:22.860075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.861 [2024-12-09 13:14:22.860104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.861 [2024-12-09 13:14:22.860144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.861 [2024-12-09 13:14:22.860161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:20.861 #17 NEW cov: 12446 ft: 15279 corp: 12/675b lim: 90 exec/s: 0 rss: 73Mb L: 48/90 MS: 1 ChangeBit- 00:07:20.861 [2024-12-09 13:14:22.920719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.861 [2024-12-09 13:14:22.920747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.861 [2024-12-09 13:14:22.920800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.861 [2024-12-09 13:14:22.920816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.861 [2024-12-09 13:14:22.920889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:20.861 [2024-12-09 13:14:22.920906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.861 [2024-12-09 13:14:22.920966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:20.861 [2024-12-09 13:14:22.920982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.861 [2024-12-09 13:14:22.921040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:20.861 [2024-12-09 13:14:22.921056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:20.861 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:20.861 #18 NEW cov: 12469 ft: 15401 corp: 13/765b lim: 90 exec/s: 0 rss: 73Mb L: 90/90 MS: 1 ChangeBinInt- 00:07:20.861 [2024-12-09 13:14:22.960358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.861 [2024-12-09 13:14:22.960385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.861 [2024-12-09 13:14:22.960448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.861 [2024-12-09 13:14:22.960464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.861 #19 NEW cov: 12469 ft: 15430 corp: 14/813b lim: 90 exec/s: 0 rss: 73Mb L: 48/90 MS: 1 ChangeByte- 00:07:20.861 [2024-12-09 13:14:23.020985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.861 [2024-12-09 13:14:23.021013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.861 [2024-12-09 13:14:23.021067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.861 [2024-12-09 13:14:23.021083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.861 [2024-12-09 13:14:23.021142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:20.861 [2024-12-09 13:14:23.021157] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.861 [2024-12-09 13:14:23.021217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:20.861 [2024-12-09 13:14:23.021234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.861 [2024-12-09 13:14:23.021294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:20.861 [2024-12-09 13:14:23.021309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:20.861 #20 NEW cov: 12469 ft: 15489 corp: 15/903b lim: 90 exec/s: 20 rss: 73Mb L: 90/90 MS: 1 ShuffleBytes- 00:07:20.861 [2024-12-09 13:14:23.060637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.861 [2024-12-09 13:14:23.060664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.861 [2024-12-09 13:14:23.060704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.861 [2024-12-09 13:14:23.060720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.861 #21 NEW cov: 12469 ft: 15513 corp: 16/955b lim: 90 exec/s: 21 rss: 73Mb L: 52/90 MS: 1 CMP- DE: "\000\000\000\366"- 00:07:21.121 [2024-12-09 13:14:23.121252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.121 [2024-12-09 13:14:23.121279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.121 [2024-12-09 13:14:23.121337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.121 [2024-12-09 13:14:23.121352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.121 [2024-12-09 13:14:23.121410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.121 [2024-12-09 13:14:23.121426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.121 [2024-12-09 13:14:23.121483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.121 [2024-12-09 13:14:23.121498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.121 [2024-12-09 13:14:23.121557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:21.121 [2024-12-09 13:14:23.121574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.121 #22 NEW cov: 12469 ft: 15560 corp: 17/1045b lim: 90 exec/s: 22 rss: 73Mb L: 90/90 MS: 1 ChangeBit- 00:07:21.121 [2024-12-09 13:14:23.161382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.121 [2024-12-09 13:14:23.161409] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.121 [2024-12-09 13:14:23.161462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.121 [2024-12-09 13:14:23.161478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.121 [2024-12-09 13:14:23.161537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.121 [2024-12-09 13:14:23.161552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.121 [2024-12-09 13:14:23.161617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.121 [2024-12-09 13:14:23.161634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.121 [2024-12-09 13:14:23.161692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:21.121 [2024-12-09 13:14:23.161708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.121 #23 NEW cov: 12469 ft: 15603 corp: 18/1135b lim: 90 exec/s: 23 rss: 73Mb L: 90/90 MS: 1 CrossOver- 00:07:21.121 [2024-12-09 13:14:23.201032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.121 [2024-12-09 13:14:23.201059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.121 [2024-12-09 13:14:23.201117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.121 [2024-12-09 13:14:23.201133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.121 #24 NEW cov: 12469 ft: 15640 corp: 19/1183b lim: 90 exec/s: 24 rss: 73Mb L: 48/90 MS: 1 ChangeByte- 00:07:21.121 [2024-12-09 13:14:23.241651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.121 [2024-12-09 13:14:23.241678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.121 [2024-12-09 13:14:23.241735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.121 [2024-12-09 13:14:23.241750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.121 [2024-12-09 13:14:23.241807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.121 [2024-12-09 13:14:23.241823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.121 [2024-12-09 13:14:23.241881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.121 [2024-12-09 13:14:23.241898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 
sqhd:0005 p:0 m:0 dnr:1 00:07:21.121 [2024-12-09 13:14:23.241957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:21.121 [2024-12-09 13:14:23.241973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.121 #25 NEW cov: 12469 ft: 15655 corp: 20/1273b lim: 90 exec/s: 25 rss: 73Mb L: 90/90 MS: 1 PersAutoDict- DE: "\000\000\000\366"- 00:07:21.122 [2024-12-09 13:14:23.301812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.122 [2024-12-09 13:14:23.301845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.122 [2024-12-09 13:14:23.301889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.122 [2024-12-09 13:14:23.301905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.122 [2024-12-09 13:14:23.301966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.122 [2024-12-09 13:14:23.301982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.122 [2024-12-09 13:14:23.302039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.122 [2024-12-09 13:14:23.302054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.122 [2024-12-09 13:14:23.302113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:21.122 [2024-12-09 13:14:23.302128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.122 #26 NEW cov: 12469 ft: 15665 corp: 21/1363b lim: 90 exec/s: 26 rss: 73Mb L: 90/90 MS: 1 CopyPart- 00:07:21.122 [2024-12-09 13:14:23.362001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.122 [2024-12-09 13:14:23.362029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.122 [2024-12-09 13:14:23.362087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.122 [2024-12-09 13:14:23.362103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.122 [2024-12-09 13:14:23.362164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.122 [2024-12-09 13:14:23.362181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.122 [2024-12-09 13:14:23.362240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.122 [2024-12-09 13:14:23.362256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.122 [2024-12-09 13:14:23.362315] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:21.122 [2024-12-09 13:14:23.362331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.381 #27 NEW cov: 12469 ft: 15690 corp: 22/1453b lim: 90 exec/s: 27 rss: 73Mb L: 90/90 MS: 1 ChangeBinInt- 00:07:21.381 [2024-12-09 13:14:23.422155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.381 [2024-12-09 13:14:23.422182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.381 [2024-12-09 13:14:23.422240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.381 [2024-12-09 13:14:23.422256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.381 [2024-12-09 13:14:23.422313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.381 [2024-12-09 13:14:23.422330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.381 [2024-12-09 13:14:23.422389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.381 [2024-12-09 13:14:23.422407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.381 [2024-12-09 13:14:23.422466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:21.381 [2024-12-09 13:14:23.422482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.381 #28 NEW cov: 12469 ft: 15702 corp: 23/1543b lim: 90 exec/s: 28 rss: 73Mb L: 90/90 MS: 1 CopyPart- 00:07:21.381 [2024-12-09 13:14:23.482330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.381 [2024-12-09 13:14:23.482358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.381 [2024-12-09 13:14:23.482416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.381 [2024-12-09 13:14:23.482432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.382 [2024-12-09 13:14:23.482503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.382 [2024-12-09 13:14:23.482520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.382 [2024-12-09 13:14:23.482578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.382 [2024-12-09 13:14:23.482599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.382 [2024-12-09 13:14:23.482657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:21.382 [2024-12-09 
13:14:23.482674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.382 #29 NEW cov: 12469 ft: 15716 corp: 24/1633b lim: 90 exec/s: 29 rss: 73Mb L: 90/90 MS: 1 CMP- DE: "Q5]\313F\012\000\000"- 00:07:21.382 [2024-12-09 13:14:23.521803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.382 [2024-12-09 13:14:23.521830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.382 #30 NEW cov: 12469 ft: 15758 corp: 25/1667b lim: 90 exec/s: 30 rss: 73Mb L: 34/90 MS: 1 CopyPart- 00:07:21.382 [2024-12-09 13:14:23.562541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.382 [2024-12-09 13:14:23.562568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.382 [2024-12-09 13:14:23.562632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.382 [2024-12-09 13:14:23.562650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.382 [2024-12-09 13:14:23.562710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.382 [2024-12-09 13:14:23.562726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.382 [2024-12-09 13:14:23.562787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.382 [2024-12-09 13:14:23.562805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.382 [2024-12-09 13:14:23.562866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:21.382 [2024-12-09 13:14:23.562882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.382 #31 NEW cov: 12469 ft: 15844 corp: 26/1757b lim: 90 exec/s: 31 rss: 73Mb L: 90/90 MS: 1 ChangeByte- 00:07:21.382 [2024-12-09 13:14:23.622709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.382 [2024-12-09 13:14:23.622736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.382 [2024-12-09 13:14:23.622798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.382 [2024-12-09 13:14:23.622814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.382 [2024-12-09 13:14:23.622875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.382 [2024-12-09 13:14:23.622890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.382 [2024-12-09 13:14:23.622948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 
00:07:21.382 [2024-12-09 13:14:23.622964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.382 [2024-12-09 13:14:23.623026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:21.382 [2024-12-09 13:14:23.623041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.642 #32 NEW cov: 12469 ft: 15863 corp: 27/1847b lim: 90 exec/s: 32 rss: 73Mb L: 90/90 MS: 1 CMP- DE: "\005\000\000\000\000\000\000\000"- 00:07:21.642 [2024-12-09 13:14:23.662205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.642 [2024-12-09 13:14:23.662233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.642 #33 NEW cov: 12469 ft: 15880 corp: 28/1876b lim: 90 exec/s: 33 rss: 74Mb L: 29/90 MS: 1 EraseBytes- 00:07:21.642 [2024-12-09 13:14:23.722521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.642 [2024-12-09 13:14:23.722548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.642 [2024-12-09 13:14:23.722606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.642 [2024-12-09 13:14:23.722638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.642 #34 NEW cov: 12469 ft: 15886 corp: 29/1928b lim: 90 exec/s: 34 rss: 74Mb L: 52/90 MS: 1 CopyPart- 00:07:21.642 [2024-12-09 13:14:23.782509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.642 [2024-12-09 13:14:23.782537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.642 #35 NEW cov: 12469 ft: 15904 corp: 30/1962b lim: 90 exec/s: 35 rss: 74Mb L: 34/90 MS: 1 ChangeBinInt- 00:07:21.642 [2024-12-09 13:14:23.843324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.642 [2024-12-09 13:14:23.843352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.642 [2024-12-09 13:14:23.843425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.642 [2024-12-09 13:14:23.843442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.642 [2024-12-09 13:14:23.843501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.642 [2024-12-09 13:14:23.843517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.642 [2024-12-09 13:14:23.843572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.642 [2024-12-09 13:14:23.843594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:07:21.642 [2024-12-09 13:14:23.843652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:21.642 [2024-12-09 13:14:23.843667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.642 [2024-12-09 13:14:23.883484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.642 [2024-12-09 13:14:23.883511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.642 [2024-12-09 13:14:23.883572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.642 [2024-12-09 13:14:23.883590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.642 [2024-12-09 13:14:23.883650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.642 [2024-12-09 13:14:23.883668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.642 [2024-12-09 13:14:23.883726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.642 [2024-12-09 13:14:23.883742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.642 [2024-12-09 13:14:23.883800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:21.642 [2024-12-09 13:14:23.883816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.902 #37 NEW cov: 12469 ft: 15939 corp: 31/2052b lim: 90 exec/s: 37 rss: 74Mb L: 90/90 MS: 2 PersAutoDict-ChangeByte- DE: "\005\000\000\000\000\000\000\000"- 00:07:21.902 [2024-12-09 13:14:23.923535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.902 [2024-12-09 13:14:23.923562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.902 [2024-12-09 13:14:23.923635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.902 [2024-12-09 13:14:23.923653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.902 [2024-12-09 13:14:23.923714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.902 [2024-12-09 13:14:23.923740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.902 [2024-12-09 13:14:23.923802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.902 [2024-12-09 13:14:23.923817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.902 [2024-12-09 13:14:23.923877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:21.902 [2024-12-09 
13:14:23.923893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.902 #38 NEW cov: 12469 ft: 15949 corp: 32/2142b lim: 90 exec/s: 38 rss: 74Mb L: 90/90 MS: 1 CMP- DE: "\377\377\377\377\000\000\000\000"- 00:07:21.902 [2024-12-09 13:14:23.963379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.902 [2024-12-09 13:14:23.963407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.902 [2024-12-09 13:14:23.963466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.902 [2024-12-09 13:14:23.963485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.902 [2024-12-09 13:14:23.963548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.902 [2024-12-09 13:14:23.963563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.902 #41 NEW cov: 12469 ft: 16219 corp: 33/2196b lim: 90 exec/s: 41 rss: 74Mb L: 54/90 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:07:21.902 [2024-12-09 13:14:24.003767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.902 [2024-12-09 13:14:24.003794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.902 [2024-12-09 13:14:24.003853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.902 [2024-12-09 13:14:24.003869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.902 [2024-12-09 13:14:24.003926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.902 [2024-12-09 13:14:24.003942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.902 [2024-12-09 13:14:24.003999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.902 [2024-12-09 13:14:24.004016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.902 [2024-12-09 13:14:24.004075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:21.902 [2024-12-09 13:14:24.004090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.902 #42 NEW cov: 12469 ft: 16245 corp: 34/2286b lim: 90 exec/s: 21 rss: 74Mb L: 90/90 MS: 1 CopyPart- 00:07:21.902 #42 DONE cov: 12469 ft: 16245 corp: 34/2286b lim: 90 exec/s: 21 rss: 74Mb 00:07:21.902 ###### Recommended dictionary. ###### 00:07:21.902 "\000\000\000\366" # Uses: 1 00:07:21.902 "Q5]\313F\012\000\000" # Uses: 0 00:07:21.902 "\005\000\000\000\000\000\000\000" # Uses: 1 00:07:21.902 "\377\377\377\377\000\000\000\000" # Uses: 0 00:07:21.902 ###### End of recommended dictionary. 
###### 00:07:21.902 Done 42 runs in 2 second(s) 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:22.162 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:22.162 [2024-12-09 13:14:24.196480] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:07:22.163 [2024-12-09 13:14:24.196546] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid303743 ] 00:07:22.163 [2024-12-09 13:14:24.394228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.423 [2024-12-09 13:14:24.432741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.423 [2024-12-09 13:14:24.491766] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.423 [2024-12-09 13:14:24.508062] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:22.423 INFO: Running with entropic power schedule (0xFF, 100). 00:07:22.423 INFO: Seed: 1148073142 00:07:22.423 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:07:22.423 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:07:22.423 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:22.423 INFO: A corpus is not provided, starting from an empty corpus 00:07:22.423 #2 INITED exec/s: 0 rss: 65Mb 00:07:22.423 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:22.423 This may also happen if the target rejected all inputs we tried so far 00:07:22.423 [2024-12-09 13:14:24.577668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.423 [2024-12-09 13:14:24.577721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.683 NEW_FUNC[1/718]: 0x461148 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:22.683 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:22.683 #8 NEW cov: 12217 ft: 12204 corp: 2/16b lim: 50 exec/s: 0 rss: 72Mb L: 15/15 MS: 1 InsertRepeatedBytes- 00:07:22.683 [2024-12-09 13:14:24.908554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.683 [2024-12-09 13:14:24.908604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.944 #9 NEW cov: 12330 ft: 12932 corp: 3/31b lim: 50 exec/s: 0 rss: 72Mb L: 15/15 MS: 1 ChangeBinInt- 00:07:22.944 [2024-12-09 13:14:24.968713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.944 [2024-12-09 13:14:24.968743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.944 #10 NEW cov: 12336 ft: 13090 corp: 4/46b lim: 50 exec/s: 0 rss: 72Mb L: 15/15 MS: 1 ChangeBit- 00:07:22.944 [2024-12-09 13:14:25.028830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.944 [2024-12-09 13:14:25.028856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.944 #11 NEW cov: 12421 ft: 13507 corp: 5/61b lim: 50 exec/s: 0 rss: 72Mb L: 15/15 MS: 1 ChangeByte- 00:07:22.944 [2024-12-09 
13:14:25.069223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.944 [2024-12-09 13:14:25.069252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.944 [2024-12-09 13:14:25.069374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:22.944 [2024-12-09 13:14:25.069395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.944 #12 NEW cov: 12421 ft: 14412 corp: 6/82b lim: 50 exec/s: 0 rss: 72Mb L: 21/21 MS: 1 CopyPart- 00:07:22.944 [2024-12-09 13:14:25.129015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.944 [2024-12-09 13:14:25.129044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.944 #13 NEW cov: 12421 ft: 14441 corp: 7/97b lim: 50 exec/s: 0 rss: 72Mb L: 15/21 MS: 1 ChangeBit- 00:07:22.944 [2024-12-09 13:14:25.169723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.944 [2024-12-09 13:14:25.169756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.944 [2024-12-09 13:14:25.169882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:22.944 [2024-12-09 13:14:25.169899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.944 [2024-12-09 13:14:25.170031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:22.944 [2024-12-09 13:14:25.170052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.203 #18 NEW cov: 12421 ft: 14793 corp: 8/127b lim: 50 exec/s: 0 rss: 72Mb L: 30/30 MS: 5 CrossOver-ShuffleBytes-CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:07:23.203 [2024-12-09 13:14:25.209276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.203 [2024-12-09 13:14:25.209305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.203 #19 NEW cov: 12421 ft: 14813 corp: 9/142b lim: 50 exec/s: 0 rss: 72Mb L: 15/30 MS: 1 ChangeBit- 00:07:23.203 [2024-12-09 13:14:25.269685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.203 [2024-12-09 13:14:25.269713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.203 [2024-12-09 13:14:25.269839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.203 [2024-12-09 13:14:25.269864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.203 #20 NEW cov: 12421 ft: 14828 corp: 10/162b lim: 50 exec/s: 0 rss: 72Mb L: 20/30 MS: 1 CrossOver- 00:07:23.203 [2024-12-09 13:14:25.329690] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.203 [2024-12-09 13:14:25.329724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.203 #21 NEW cov: 12421 ft: 14867 corp: 11/177b lim: 50 exec/s: 0 rss: 72Mb L: 15/30 MS: 1 CMP- DE: "\017\000\000\000\000\000\000\000"- 00:07:23.203 [2024-12-09 13:14:25.369985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.203 [2024-12-09 13:14:25.370020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.203 [2024-12-09 13:14:25.370139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.203 [2024-12-09 13:14:25.370164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.203 #22 NEW cov: 12421 ft: 14912 corp: 12/198b lim: 50 exec/s: 0 rss: 73Mb L: 21/30 MS: 1 ChangeBinInt- 00:07:23.203 [2024-12-09 13:14:25.439943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.203 [2024-12-09 13:14:25.439977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.463 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:23.463 #23 NEW cov: 12444 ft: 14973 corp: 13/213b lim: 50 exec/s: 0 rss: 73Mb L: 15/30 MS: 1 ChangeBit- 00:07:23.463 [2024-12-09 13:14:25.480034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.463 [2024-12-09 13:14:25.480059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.463 #24 NEW cov: 12444 ft: 15016 corp: 14/228b lim: 50 exec/s: 0 rss: 73Mb L: 15/30 MS: 1 CopyPart- 00:07:23.463 [2024-12-09 13:14:25.540171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.463 [2024-12-09 13:14:25.540196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.463 #25 NEW cov: 12444 ft: 15033 corp: 15/242b lim: 50 exec/s: 25 rss: 73Mb L: 14/30 MS: 1 EraseBytes- 00:07:23.463 [2024-12-09 13:14:25.580287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.463 [2024-12-09 13:14:25.580317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.463 #28 NEW cov: 12444 ft: 15043 corp: 16/254b lim: 50 exec/s: 28 rss: 73Mb L: 12/30 MS: 3 EraseBytes-EraseBytes-PersAutoDict- DE: "\017\000\000\000\000\000\000\000"- 00:07:23.463 [2024-12-09 13:14:25.630533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.463 [2024-12-09 13:14:25.630560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.463 #29 NEW cov: 12444 ft: 15071 corp: 17/266b lim: 50 exec/s: 29 rss: 73Mb L: 12/30 MS: 1 ShuffleBytes- 00:07:23.463 [2024-12-09 13:14:25.700625] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.463 [2024-12-09 13:14:25.700658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.723 #30 NEW cov: 12444 ft: 15187 corp: 18/281b lim: 50 exec/s: 30 rss: 73Mb L: 15/30 MS: 1 CopyPart- 00:07:23.723 [2024-12-09 13:14:25.780320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.723 [2024-12-09 13:14:25.780354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.723 #31 NEW cov: 12444 ft: 15336 corp: 19/297b lim: 50 exec/s: 31 rss: 73Mb L: 16/30 MS: 1 InsertByte- 00:07:23.723 [2024-12-09 13:14:25.840894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.723 [2024-12-09 13:14:25.840923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.723 [2024-12-09 13:14:25.840962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.723 [2024-12-09 13:14:25.840978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.723 [2024-12-09 13:14:25.841031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:23.723 [2024-12-09 13:14:25.841046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.723 [2024-12-09 13:14:25.841107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:23.723 [2024-12-09 13:14:25.841122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.723 #32 NEW cov: 12444 ft: 15727 corp: 20/344b lim: 50 exec/s: 32 rss: 73Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:07:23.723 [2024-12-09 13:14:25.880602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.723 [2024-12-09 13:14:25.880629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.723 #33 NEW cov: 12444 ft: 15827 corp: 21/359b lim: 50 exec/s: 33 rss: 73Mb L: 15/47 MS: 1 ShuffleBytes- 00:07:23.723 [2024-12-09 13:14:25.920996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.723 [2024-12-09 13:14:25.921024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.723 [2024-12-09 13:14:25.921063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.723 [2024-12-09 13:14:25.921078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.724 [2024-12-09 13:14:25.921132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:23.724 [2024-12-09 13:14:25.921149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.724 #34 NEW cov: 12444 ft: 15883 corp: 22/393b lim: 50 exec/s: 34 rss: 73Mb L: 34/47 MS: 1 CrossOver- 00:07:23.983 [2024-12-09 13:14:25.980856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.984 [2024-12-09 13:14:25.980884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.984 #35 NEW cov: 12444 ft: 15925 corp: 23/409b lim: 50 exec/s: 35 rss: 73Mb L: 16/47 MS: 1 PersAutoDict- DE: "\017\000\000\000\000\000\000\000"- 00:07:23.984 [2024-12-09 13:14:26.041052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.984 [2024-12-09 13:14:26.041080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.984 #39 NEW cov: 12444 ft: 15959 corp: 24/423b lim: 50 exec/s: 39 rss: 73Mb L: 14/47 MS: 4 EraseBytes-ChangeByte-ChangeBit-PersAutoDict- DE: "\017\000\000\000\000\000\000\000"- 00:07:23.984 [2024-12-09 13:14:26.101341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.984 [2024-12-09 13:14:26.101367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.984 [2024-12-09 13:14:26.101416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.984 [2024-12-09 13:14:26.101433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.984 #40 NEW cov: 12444 ft: 15978 corp: 25/443b lim: 50 exec/s: 40 rss: 73Mb L: 20/47 MS: 1 ShuffleBytes- 00:07:23.984 [2024-12-09 13:14:26.161361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.984 [2024-12-09 13:14:26.161390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.984 #41 NEW cov: 12444 ft: 15990 corp: 26/458b lim: 50 exec/s: 41 rss: 73Mb L: 15/47 MS: 1 ChangeByte- 00:07:23.984 [2024-12-09 13:14:26.201502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.984 [2024-12-09 13:14:26.201530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.244 #42 NEW cov: 12444 ft: 15994 corp: 27/476b lim: 50 exec/s: 42 rss: 73Mb L: 18/47 MS: 1 CMP- DE: "\376\377\377\377"- 00:07:24.244 [2024-12-09 13:14:26.261684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.244 [2024-12-09 13:14:26.261711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.244 #43 NEW cov: 12444 ft: 16011 corp: 28/490b lim: 50 exec/s: 43 rss: 73Mb L: 14/47 MS: 1 EraseBytes- 00:07:24.244 [2024-12-09 13:14:26.321976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.244 [2024-12-09 13:14:26.322003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:24.244 [2024-12-09 13:14:26.322058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:24.244 [2024-12-09 13:14:26.322076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.244 #44 NEW cov: 12444 ft: 16031 corp: 29/510b lim: 50 exec/s: 44 rss: 73Mb L: 20/47 MS: 1 ChangeBit- 00:07:24.244 [2024-12-09 13:14:26.361926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.244 [2024-12-09 13:14:26.361954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.244 #46 NEW cov: 12444 ft: 16102 corp: 30/523b lim: 50 exec/s: 46 rss: 74Mb L: 13/47 MS: 2 EraseBytes-PersAutoDict- DE: "\376\377\377\377"- 00:07:24.244 [2024-12-09 13:14:26.422221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.244 [2024-12-09 13:14:26.422249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.244 [2024-12-09 13:14:26.422316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:24.244 [2024-12-09 13:14:26.422333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.244 #47 NEW cov: 12444 ft: 16118 corp: 31/547b lim: 50 exec/s: 47 rss: 74Mb L: 24/47 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:07:24.244 [2024-12-09 13:14:26.462234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.244 [2024-12-09 13:14:26.462262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.505 #48 NEW cov: 12444 ft: 16141 corp: 32/560b lim: 50 exec/s: 48 rss: 74Mb L: 13/47 MS: 1 CopyPart- 00:07:24.505 [2024-12-09 13:14:26.522413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.505 [2024-12-09 13:14:26.522441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.505 #49 NEW cov: 12444 ft: 16162 corp: 33/574b lim: 50 exec/s: 24 rss: 74Mb L: 14/47 MS: 1 CrossOver- 00:07:24.505 #49 DONE cov: 12444 ft: 16162 corp: 33/574b lim: 50 exec/s: 24 rss: 74Mb 00:07:24.505 ###### Recommended dictionary. ###### 00:07:24.505 "\017\000\000\000\000\000\000\000" # Uses: 3 00:07:24.505 "\376\377\377\377" # Uses: 2 00:07:24.505 ###### End of recommended dictionary. 
###### 00:07:24.505 Done 49 runs in 2 second(s) 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:24.505 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:24.506 [2024-12-09 13:14:26.716098] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:07:24.506 [2024-12-09 13:14:26.716166] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid304241 ] 00:07:24.771 [2024-12-09 13:14:26.917159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.771 [2024-12-09 13:14:26.950446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.771 [2024-12-09 13:14:27.009561] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.037 [2024-12-09 13:14:27.025869] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:25.037 INFO: Running with entropic power schedule (0xFF, 100). 00:07:25.037 INFO: Seed: 3667075084 00:07:25.037 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:07:25.037 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:07:25.037 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:25.037 INFO: A corpus is not provided, starting from an empty corpus 00:07:25.037 #2 INITED exec/s: 0 rss: 65Mb 00:07:25.037 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:25.037 This may also happen if the target rejected all inputs we tried so far 00:07:25.037 [2024-12-09 13:14:27.081249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.037 [2024-12-09 13:14:27.081281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.037 [2024-12-09 13:14:27.081355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:25.037 [2024-12-09 13:14:27.081373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.296 NEW_FUNC[1/718]: 0x463418 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:25.296 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:25.296 #12 NEW cov: 12243 ft: 12240 corp: 2/44b lim: 85 exec/s: 0 rss: 72Mb L: 43/43 MS: 5 InsertByte-CopyPart-EraseBytes-CrossOver-InsertRepeatedBytes- 00:07:25.296 [2024-12-09 13:14:27.412343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.296 [2024-12-09 13:14:27.412402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.296 [2024-12-09 13:14:27.412486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:25.296 [2024-12-09 13:14:27.412516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.296 #13 NEW cov: 12356 ft: 12932 corp: 3/88b lim: 85 exec/s: 0 rss: 72Mb L: 44/44 MS: 1 InsertByte- 00:07:25.296 [2024-12-09 13:14:27.482208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.296 [2024-12-09 13:14:27.482236] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.296 [2024-12-09 13:14:27.482286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:25.296 [2024-12-09 13:14:27.482301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.296 #14 NEW cov: 12362 ft: 13176 corp: 4/132b lim: 85 exec/s: 0 rss: 72Mb L: 44/44 MS: 1 InsertByte- 00:07:25.296 [2024-12-09 13:14:27.522142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.296 [2024-12-09 13:14:27.522171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.554 #16 NEW cov: 12447 ft: 14162 corp: 5/157b lim: 85 exec/s: 0 rss: 72Mb L: 25/44 MS: 2 ShuffleBytes-CrossOver- 00:07:25.554 [2024-12-09 13:14:27.562266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.554 [2024-12-09 13:14:27.562294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.554 #17 NEW cov: 12447 ft: 14335 corp: 6/180b lim: 85 exec/s: 0 rss: 72Mb L: 23/44 MS: 1 EraseBytes- 00:07:25.554 [2024-12-09 13:14:27.622578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.554 [2024-12-09 13:14:27.622611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.554 [2024-12-09 13:14:27.622669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:25.554 [2024-12-09 13:14:27.622696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.554 #18 NEW cov: 12447 ft: 14384 corp: 7/224b lim: 85 exec/s: 0 rss: 72Mb L: 44/44 MS: 1 ShuffleBytes- 00:07:25.554 [2024-12-09 13:14:27.682768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.554 [2024-12-09 13:14:27.682795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.554 [2024-12-09 13:14:27.682849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:25.554 [2024-12-09 13:14:27.682865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.554 #19 NEW cov: 12447 ft: 14499 corp: 8/272b lim: 85 exec/s: 0 rss: 72Mb L: 48/48 MS: 1 CopyPart- 00:07:25.554 [2024-12-09 13:14:27.722898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.554 [2024-12-09 13:14:27.722925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.554 [2024-12-09 13:14:27.722978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:25.554 [2024-12-09 13:14:27.722993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.554 #20 NEW cov: 12447 ft: 14570 corp: 9/316b lim: 85 exec/s: 0 rss: 72Mb L: 44/48 MS: 1 InsertByte- 00:07:25.554 [2024-12-09 13:14:27.762962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.554 [2024-12-09 13:14:27.762988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.554 [2024-12-09 13:14:27.763046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:25.554 [2024-12-09 13:14:27.763061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.554 #21 NEW cov: 12447 ft: 14583 corp: 10/360b lim: 85 exec/s: 0 rss: 72Mb L: 44/48 MS: 1 CrossOver- 00:07:25.813 [2024-12-09 13:14:27.802971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.813 [2024-12-09 13:14:27.802998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.813 #22 NEW cov: 12447 ft: 14669 corp: 11/386b lim: 85 exec/s: 0 rss: 72Mb L: 26/48 MS: 1 InsertByte- 00:07:25.813 [2024-12-09 13:14:27.843193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.813 [2024-12-09 13:14:27.843220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.813 [2024-12-09 13:14:27.843262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:25.813 [2024-12-09 13:14:27.843279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.813 #28 NEW cov: 12447 ft: 14705 corp: 12/427b lim: 85 exec/s: 0 rss: 72Mb L: 41/48 MS: 1 EraseBytes- 00:07:25.813 [2024-12-09 13:14:27.883164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.813 [2024-12-09 13:14:27.883190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.813 #29 NEW cov: 12447 ft: 14745 corp: 13/453b lim: 85 exec/s: 0 rss: 72Mb L: 26/48 MS: 1 ShuffleBytes- 00:07:25.813 [2024-12-09 13:14:27.943492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.813 [2024-12-09 13:14:27.943518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.813 [2024-12-09 13:14:27.943573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:25.813 [2024-12-09 13:14:27.943592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.813 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:25.813 #30 NEW cov: 12470 ft: 14865 corp: 14/494b lim: 85 exec/s: 0 rss: 73Mb L: 41/48 MS: 1 ChangeByte- 00:07:25.813 [2024-12-09 13:14:28.003664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 
nsid:0 00:07:25.813 [2024-12-09 13:14:28.003691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.813 [2024-12-09 13:14:28.003751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:25.813 [2024-12-09 13:14:28.003765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.813 #31 NEW cov: 12470 ft: 14923 corp: 15/538b lim: 85 exec/s: 0 rss: 73Mb L: 44/48 MS: 1 ChangeBit- 00:07:26.072 [2024-12-09 13:14:28.063851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.072 [2024-12-09 13:14:28.063878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.072 [2024-12-09 13:14:28.063950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.072 [2024-12-09 13:14:28.063966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.072 #32 NEW cov: 12470 ft: 14948 corp: 16/584b lim: 85 exec/s: 32 rss: 73Mb L: 46/48 MS: 1 CrossOver- 00:07:26.072 [2024-12-09 13:14:28.124012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.072 [2024-12-09 13:14:28.124039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.072 [2024-12-09 13:14:28.124079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.072 [2024-12-09 13:14:28.124095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.072 #33 NEW cov: 12470 ft: 14963 corp: 17/618b lim: 85 exec/s: 33 rss: 73Mb L: 34/48 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\020"- 00:07:26.072 [2024-12-09 13:14:28.164082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.072 [2024-12-09 13:14:28.164109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.072 [2024-12-09 13:14:28.164146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.072 [2024-12-09 13:14:28.164162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.072 #34 NEW cov: 12470 ft: 14976 corp: 18/662b lim: 85 exec/s: 34 rss: 73Mb L: 44/48 MS: 1 ChangeBit- 00:07:26.072 [2024-12-09 13:14:28.224415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.072 [2024-12-09 13:14:28.224441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.072 [2024-12-09 13:14:28.224494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.072 [2024-12-09 13:14:28.224508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:07:26.072 [2024-12-09 13:14:28.224562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:26.072 [2024-12-09 13:14:28.224578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.072 #35 NEW cov: 12470 ft: 15334 corp: 19/720b lim: 85 exec/s: 35 rss: 73Mb L: 58/58 MS: 1 InsertRepeatedBytes- 00:07:26.072 [2024-12-09 13:14:28.264392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.072 [2024-12-09 13:14:28.264418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.072 [2024-12-09 13:14:28.264471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.072 [2024-12-09 13:14:28.264488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.072 #36 NEW cov: 12470 ft: 15412 corp: 20/768b lim: 85 exec/s: 36 rss: 73Mb L: 48/58 MS: 1 CopyPart- 00:07:26.331 [2024-12-09 13:14:28.324464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.331 [2024-12-09 13:14:28.324491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.331 #37 NEW cov: 12470 ft: 15452 corp: 21/791b lim: 85 exec/s: 37 rss: 73Mb L: 23/58 MS: 1 ChangeBit- 00:07:26.331 [2024-12-09 13:14:28.384603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.331 [2024-12-09 13:14:28.384633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.331 #43 NEW cov: 12470 ft: 15480 corp: 22/817b lim: 85 exec/s: 43 rss: 73Mb L: 26/58 MS: 1 CopyPart- 00:07:26.331 [2024-12-09 13:14:28.425090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.331 [2024-12-09 13:14:28.425118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.331 [2024-12-09 13:14:28.425163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.331 [2024-12-09 13:14:28.425178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.331 [2024-12-09 13:14:28.425232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:26.331 [2024-12-09 13:14:28.425247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.331 [2024-12-09 13:14:28.425301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:26.331 [2024-12-09 13:14:28.425316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.331 #44 NEW cov: 12470 ft: 15860 corp: 23/899b lim: 85 exec/s: 44 rss: 73Mb L: 82/82 MS: 1 InsertRepeatedBytes- 00:07:26.331 [2024-12-09 13:14:28.464927] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.331 [2024-12-09 13:14:28.464954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.331 [2024-12-09 13:14:28.465005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.331 [2024-12-09 13:14:28.465022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.331 #45 NEW cov: 12470 ft: 15866 corp: 24/943b lim: 85 exec/s: 45 rss: 73Mb L: 44/82 MS: 1 ChangeBinInt- 00:07:26.331 [2024-12-09 13:14:28.525136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.331 [2024-12-09 13:14:28.525163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.331 [2024-12-09 13:14:28.525204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.331 [2024-12-09 13:14:28.525219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.331 #46 NEW cov: 12470 ft: 15895 corp: 25/991b lim: 85 exec/s: 46 rss: 73Mb L: 48/82 MS: 1 ChangeBinInt- 00:07:26.331 [2024-12-09 13:14:28.565214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.331 [2024-12-09 13:14:28.565240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.331 [2024-12-09 13:14:28.565293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.331 [2024-12-09 13:14:28.565310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.591 #47 NEW cov: 12470 ft: 15915 corp: 26/1029b lim: 85 exec/s: 47 rss: 73Mb L: 38/82 MS: 1 CMP- DE: "\001\000\000\006"- 00:07:26.591 [2024-12-09 13:14:28.625355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.591 [2024-12-09 13:14:28.625382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.591 [2024-12-09 13:14:28.625446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.591 [2024-12-09 13:14:28.625465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.591 #48 NEW cov: 12470 ft: 15931 corp: 27/1074b lim: 85 exec/s: 48 rss: 73Mb L: 45/82 MS: 1 InsertByte- 00:07:26.591 [2024-12-09 13:14:28.685529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.591 [2024-12-09 13:14:28.685557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.591 [2024-12-09 13:14:28.685619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.591 [2024-12-09 13:14:28.685635] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.591 #49 NEW cov: 12470 ft: 15950 corp: 28/1112b lim: 85 exec/s: 49 rss: 74Mb L: 38/82 MS: 1 ChangeBinInt- 00:07:26.591 [2024-12-09 13:14:28.745684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.591 [2024-12-09 13:14:28.745720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.591 [2024-12-09 13:14:28.745778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.591 [2024-12-09 13:14:28.745795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.591 #50 NEW cov: 12470 ft: 15954 corp: 29/1156b lim: 85 exec/s: 50 rss: 74Mb L: 44/82 MS: 1 ChangeBit- 00:07:26.591 [2024-12-09 13:14:28.785788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.591 [2024-12-09 13:14:28.785814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.591 [2024-12-09 13:14:28.785853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.591 [2024-12-09 13:14:28.785870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.591 #51 NEW cov: 12470 ft: 15957 corp: 30/1198b lim: 85 exec/s: 51 rss: 74Mb L: 42/82 MS: 1 EraseBytes- 00:07:26.851 [2024-12-09 13:14:28.846119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.851 [2024-12-09 13:14:28.846147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.851 [2024-12-09 13:14:28.846184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.851 [2024-12-09 13:14:28.846200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.851 [2024-12-09 13:14:28.846253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:26.851 [2024-12-09 13:14:28.846268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.852 #52 NEW cov: 12470 ft: 16021 corp: 31/1254b lim: 85 exec/s: 52 rss: 74Mb L: 56/82 MS: 1 CrossOver- 00:07:26.852 [2024-12-09 13:14:28.905981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.852 [2024-12-09 13:14:28.906008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.852 #53 NEW cov: 12470 ft: 16034 corp: 32/1284b lim: 85 exec/s: 53 rss: 74Mb L: 30/82 MS: 1 InsertRepeatedBytes- 00:07:26.852 [2024-12-09 13:14:28.946085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.852 [2024-12-09 13:14:28.946112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:26.852 #54 NEW cov: 12470 ft: 16043 corp: 33/1310b lim: 85 exec/s: 54 rss: 74Mb L: 26/82 MS: 1 CopyPart- 00:07:26.852 [2024-12-09 13:14:29.006300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.852 [2024-12-09 13:14:29.006327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.852 #55 NEW cov: 12470 ft: 16056 corp: 34/1337b lim: 85 exec/s: 55 rss: 74Mb L: 27/82 MS: 1 InsertByte- 00:07:26.852 [2024-12-09 13:14:29.066567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.852 [2024-12-09 13:14:29.066598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.852 [2024-12-09 13:14:29.066651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.852 [2024-12-09 13:14:29.066667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.112 #56 NEW cov: 12470 ft: 16057 corp: 35/1382b lim: 85 exec/s: 28 rss: 74Mb L: 45/82 MS: 1 PersAutoDict- DE: "\001\000\000\006"- 00:07:27.112 #56 DONE cov: 12470 ft: 16057 corp: 35/1382b lim: 85 exec/s: 28 rss: 74Mb 00:07:27.112 ###### Recommended dictionary. ###### 00:07:27.112 "\001\000\000\000\000\000\000\020" # Uses: 0 00:07:27.112 "\001\000\000\006" # Uses: 1 00:07:27.112 ###### End of recommended dictionary. ###### 00:07:27.112 Done 56 runs in 2 second(s) 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:27.112 13:14:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:27.112 [2024-12-09 13:14:29.259254] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:27.113 [2024-12-09 13:14:29.259346] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid304768 ] 00:07:27.373 [2024-12-09 13:14:29.460307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.373 [2024-12-09 13:14:29.494130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.373 [2024-12-09 13:14:29.553159] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.373 [2024-12-09 13:14:29.569477] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:27.373 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.373 INFO: Seed: 1916100108 00:07:27.373 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:07:27.373 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:07:27.373 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:27.373 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.373 #2 INITED exec/s: 0 rss: 66Mb 00:07:27.373 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:27.373 This may also happen if the target rejected all inputs we tried so far 00:07:27.633 [2024-12-09 13:14:29.635128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:27.633 [2024-12-09 13:14:29.635158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.633 [2024-12-09 13:14:29.635207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:27.633 [2024-12-09 13:14:29.635224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.634 [2024-12-09 13:14:29.635279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:27.634 [2024-12-09 13:14:29.635294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.634 [2024-12-09 13:14:29.635349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:27.634 [2024-12-09 13:14:29.635363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.634 [2024-12-09 13:14:29.635422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:27.634 [2024-12-09 13:14:29.635438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:27.894 NEW_FUNC[1/717]: 0x466658 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:27.894 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:27.894 #4 NEW cov: 12158 ft: 12150 corp: 2/26b lim: 25 exec/s: 0 rss: 72Mb L: 25/25 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:27.894 [2024-12-09 13:14:29.966290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:27.894 [2024-12-09 13:14:29.966363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.894 [2024-12-09 13:14:29.966469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:27.894 [2024-12-09 13:14:29.966507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.894 [2024-12-09 13:14:29.966607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:27.894 [2024-12-09 13:14:29.966645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.895 [2024-12-09 13:14:29.966740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:27.895 [2024-12-09 13:14:29.966776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.895 [2024-12-09 13:14:29.966882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) 
sqid:1 cid:4 nsid:0 00:07:27.895 [2024-12-09 13:14:29.966919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:27.895 #5 NEW cov: 12288 ft: 12910 corp: 3/51b lim: 25 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:27.895 [2024-12-09 13:14:30.036070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:27.895 [2024-12-09 13:14:30.036101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.895 [2024-12-09 13:14:30.036162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:27.895 [2024-12-09 13:14:30.036179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.895 [2024-12-09 13:14:30.036235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:27.895 [2024-12-09 13:14:30.036251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.895 [2024-12-09 13:14:30.036306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:27.895 [2024-12-09 13:14:30.036321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.895 [2024-12-09 13:14:30.036376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:27.895 [2024-12-09 13:14:30.036392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:27.895 #6 NEW cov: 12294 ft: 13188 corp: 4/76b lim: 25 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 CopyPart- 00:07:27.895 [2024-12-09 13:14:30.075982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:27.895 [2024-12-09 13:14:30.076008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.895 [2024-12-09 13:14:30.076062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:27.895 [2024-12-09 13:14:30.076078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.895 [2024-12-09 13:14:30.076134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:27.895 [2024-12-09 13:14:30.076150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.895 #8 NEW cov: 12379 ft: 13921 corp: 5/93b lim: 25 exec/s: 0 rss: 72Mb L: 17/25 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:27.895 [2024-12-09 13:14:30.116093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:27.895 [2024-12-09 13:14:30.116120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.895 [2024-12-09 13:14:30.116159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 
cid:1 nsid:0 00:07:27.895 [2024-12-09 13:14:30.116175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.895 [2024-12-09 13:14:30.116232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:27.895 [2024-12-09 13:14:30.116248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.154 #9 NEW cov: 12379 ft: 14006 corp: 6/111b lim: 25 exec/s: 0 rss: 72Mb L: 18/25 MS: 1 CrossOver- 00:07:28.154 [2024-12-09 13:14:30.176521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.154 [2024-12-09 13:14:30.176551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.154 [2024-12-09 13:14:30.176603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.155 [2024-12-09 13:14:30.176635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.176690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.155 [2024-12-09 13:14:30.176705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.176758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.155 [2024-12-09 13:14:30.176774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.176828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.155 [2024-12-09 13:14:30.176843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.155 #20 NEW cov: 12379 ft: 14097 corp: 7/136b lim: 25 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:28.155 [2024-12-09 13:14:30.216491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.155 [2024-12-09 13:14:30.216518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.216565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.155 [2024-12-09 13:14:30.216581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.216641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.155 [2024-12-09 13:14:30.216657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.216713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.155 [2024-12-09 13:14:30.216727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.155 #21 NEW cov: 12379 ft: 14123 corp: 8/157b lim: 25 exec/s: 0 rss: 73Mb L: 21/25 MS: 1 InsertRepeatedBytes- 00:07:28.155 [2024-12-09 13:14:30.276768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.155 [2024-12-09 13:14:30.276797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.276846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.155 [2024-12-09 13:14:30.276861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.276917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.155 [2024-12-09 13:14:30.276933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.276990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.155 [2024-12-09 13:14:30.277006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.277067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.155 [2024-12-09 13:14:30.277083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.155 #22 NEW cov: 12379 ft: 14188 corp: 9/182b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ChangeBit- 00:07:28.155 [2024-12-09 13:14:30.336910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.155 [2024-12-09 13:14:30.336938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.337009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.155 [2024-12-09 13:14:30.337026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.337084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.155 [2024-12-09 13:14:30.337099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.337153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.155 [2024-12-09 13:14:30.337169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.337225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.155 [2024-12-09 13:14:30.337241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.155 #23 NEW cov: 12379 ft: 14200 corp: 10/207b lim: 25 
exec/s: 0 rss: 73Mb L: 25/25 MS: 1 CopyPart- 00:07:28.155 [2024-12-09 13:14:30.397098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.155 [2024-12-09 13:14:30.397125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.397182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.155 [2024-12-09 13:14:30.397199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.397253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.155 [2024-12-09 13:14:30.397268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.397323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.155 [2024-12-09 13:14:30.397340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.155 [2024-12-09 13:14:30.397394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.155 [2024-12-09 13:14:30.397410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.415 #24 NEW cov: 12379 ft: 14264 corp: 11/232b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 CrossOver- 00:07:28.415 [2024-12-09 13:14:30.437201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.415 [2024-12-09 13:14:30.437228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.437281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.415 [2024-12-09 13:14:30.437298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.437351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.415 [2024-12-09 13:14:30.437370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.437425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.415 [2024-12-09 13:14:30.437440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.437497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.415 [2024-12-09 13:14:30.437513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.415 #25 NEW cov: 12379 ft: 14283 corp: 12/257b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 CopyPart- 00:07:28.415 [2024-12-09 13:14:30.497396] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.415 [2024-12-09 13:14:30.497424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.497475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.415 [2024-12-09 13:14:30.497490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.497545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.415 [2024-12-09 13:14:30.497559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.497614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.415 [2024-12-09 13:14:30.497630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.497685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.415 [2024-12-09 13:14:30.497701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.415 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:28.415 #26 NEW cov: 12402 ft: 14313 corp: 13/282b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 CopyPart- 00:07:28.415 [2024-12-09 13:14:30.557569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.415 [2024-12-09 13:14:30.557603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.557660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.415 [2024-12-09 13:14:30.557676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.557732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.415 [2024-12-09 13:14:30.557749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.557803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.415 [2024-12-09 13:14:30.557818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.557876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.415 [2024-12-09 13:14:30.557893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.415 #27 NEW cov: 12402 ft: 14408 corp: 14/307b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ChangeBit- 00:07:28.415 [2024-12-09 13:14:30.597159] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.415 [2024-12-09 13:14:30.597186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.415 #30 NEW cov: 12402 ft: 14845 corp: 15/314b lim: 25 exec/s: 30 rss: 73Mb L: 7/25 MS: 3 ChangeByte-InsertByte-CrossOver- 00:07:28.415 [2024-12-09 13:14:30.637729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.415 [2024-12-09 13:14:30.637755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.637811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.415 [2024-12-09 13:14:30.637826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.637879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.415 [2024-12-09 13:14:30.637894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.637947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.415 [2024-12-09 13:14:30.637962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.415 [2024-12-09 13:14:30.638016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.415 [2024-12-09 13:14:30.638032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.415 #31 NEW cov: 12402 ft: 14855 corp: 16/339b lim: 25 exec/s: 31 rss: 73Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:28.674 [2024-12-09 13:14:30.677618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.674 [2024-12-09 13:14:30.677645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.674 [2024-12-09 13:14:30.677699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.674 [2024-12-09 13:14:30.677715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.674 [2024-12-09 13:14:30.677770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.674 [2024-12-09 13:14:30.677786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.674 #32 NEW cov: 12402 ft: 14945 corp: 17/354b lim: 25 exec/s: 32 rss: 73Mb L: 15/25 MS: 1 EraseBytes- 00:07:28.674 [2024-12-09 13:14:30.738014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.674 [2024-12-09 13:14:30.738041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.674 [2024-12-09 13:14:30.738112] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.674 [2024-12-09 13:14:30.738128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.674 [2024-12-09 13:14:30.738182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.674 [2024-12-09 13:14:30.738198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.674 [2024-12-09 13:14:30.738254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.674 [2024-12-09 13:14:30.738270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.674 [2024-12-09 13:14:30.738332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.674 [2024-12-09 13:14:30.738346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.674 #33 NEW cov: 12402 ft: 14958 corp: 18/379b lim: 25 exec/s: 33 rss: 73Mb L: 25/25 MS: 1 CrossOver- 00:07:28.674 [2024-12-09 13:14:30.798047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.674 [2024-12-09 13:14:30.798074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.674 [2024-12-09 13:14:30.798125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.674 [2024-12-09 13:14:30.798140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.674 [2024-12-09 13:14:30.798196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.674 [2024-12-09 13:14:30.798212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.674 [2024-12-09 13:14:30.798267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.674 [2024-12-09 13:14:30.798281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.674 #34 NEW cov: 12402 ft: 15029 corp: 19/400b lim: 25 exec/s: 34 rss: 73Mb L: 21/25 MS: 1 CopyPart- 00:07:28.674 [2024-12-09 13:14:30.858348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.674 [2024-12-09 13:14:30.858375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.674 [2024-12-09 13:14:30.858429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.674 [2024-12-09 13:14:30.858444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.674 [2024-12-09 13:14:30.858497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.674 [2024-12-09 13:14:30.858513] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.674 [2024-12-09 13:14:30.858567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.674 [2024-12-09 13:14:30.858582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.674 [2024-12-09 13:14:30.858659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.674 [2024-12-09 13:14:30.858676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.674 #35 NEW cov: 12402 ft: 15046 corp: 20/425b lim: 25 exec/s: 35 rss: 74Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:28.932 [2024-12-09 13:14:30.918517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.933 [2024-12-09 13:14:30.918545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.933 [2024-12-09 13:14:30.918606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.933 [2024-12-09 13:14:30.918622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.933 [2024-12-09 13:14:30.918680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.933 [2024-12-09 13:14:30.918695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.933 [2024-12-09 13:14:30.918752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.933 [2024-12-09 13:14:30.918768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.933 [2024-12-09 13:14:30.918821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.933 [2024-12-09 13:14:30.918836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.933 #36 NEW cov: 12402 ft: 15048 corp: 21/450b lim: 25 exec/s: 36 rss: 74Mb L: 25/25 MS: 1 CrossOver- 00:07:28.933 [2024-12-09 13:14:30.978564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.933 [2024-12-09 13:14:30.978593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.933 [2024-12-09 13:14:30.978646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.933 [2024-12-09 13:14:30.978661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.933 [2024-12-09 13:14:30.978719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.933 [2024-12-09 13:14:30.978734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:07:28.933 [2024-12-09 13:14:30.978790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.933 [2024-12-09 13:14:30.978805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.933 #37 NEW cov: 12402 ft: 15065 corp: 22/474b lim: 25 exec/s: 37 rss: 74Mb L: 24/25 MS: 1 EraseBytes- 00:07:28.933 [2024-12-09 13:14:31.038739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.933 [2024-12-09 13:14:31.038765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.933 [2024-12-09 13:14:31.038814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.933 [2024-12-09 13:14:31.038830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.933 [2024-12-09 13:14:31.038886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.933 [2024-12-09 13:14:31.038902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.933 [2024-12-09 13:14:31.038956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.933 [2024-12-09 13:14:31.038972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.933 #38 NEW cov: 12402 ft: 15073 corp: 23/497b lim: 25 exec/s: 38 rss: 74Mb L: 23/25 MS: 1 CMP- DE: "\000\000"- 00:07:28.933 [2024-12-09 13:14:31.098779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.933 [2024-12-09 13:14:31.098805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.933 [2024-12-09 13:14:31.098852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.933 [2024-12-09 13:14:31.098867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.933 [2024-12-09 13:14:31.098937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.933 [2024-12-09 13:14:31.098953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.933 #39 NEW cov: 12402 ft: 15092 corp: 24/512b lim: 25 exec/s: 39 rss: 74Mb L: 15/25 MS: 1 ChangeByte- 00:07:28.933 [2024-12-09 13:14:31.159170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.933 [2024-12-09 13:14:31.159196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.933 [2024-12-09 13:14:31.159268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.933 [2024-12-09 13:14:31.159283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:28.933 [2024-12-09 13:14:31.159337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.933 [2024-12-09 13:14:31.159353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.933 [2024-12-09 13:14:31.159409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.933 [2024-12-09 13:14:31.159423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.933 [2024-12-09 13:14:31.159480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.933 [2024-12-09 13:14:31.159495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.192 #40 NEW cov: 12402 ft: 15111 corp: 25/537b lim: 25 exec/s: 40 rss: 74Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:29.192 [2024-12-09 13:14:31.219345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.192 [2024-12-09 13:14:31.219372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.192 [2024-12-09 13:14:31.219443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.192 [2024-12-09 13:14:31.219459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.192 [2024-12-09 13:14:31.219513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.192 [2024-12-09 13:14:31.219529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.192 [2024-12-09 13:14:31.219582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.192 [2024-12-09 13:14:31.219602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.192 [2024-12-09 13:14:31.219668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:29.192 [2024-12-09 13:14:31.219683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.192 #41 NEW cov: 12402 ft: 15124 corp: 26/562b lim: 25 exec/s: 41 rss: 74Mb L: 25/25 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:29.192 [2024-12-09 13:14:31.259314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.192 [2024-12-09 13:14:31.259341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.192 [2024-12-09 13:14:31.259391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.192 [2024-12-09 13:14:31.259407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.192 [2024-12-09 13:14:31.259461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.192 [2024-12-09 13:14:31.259479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.192 [2024-12-09 13:14:31.259536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.192 [2024-12-09 13:14:31.259551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.192 #42 NEW cov: 12402 ft: 15133 corp: 27/583b lim: 25 exec/s: 42 rss: 74Mb L: 21/25 MS: 1 CopyPart- 00:07:29.192 [2024-12-09 13:14:31.299416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.192 [2024-12-09 13:14:31.299442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.192 [2024-12-09 13:14:31.299489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.192 [2024-12-09 13:14:31.299505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.192 [2024-12-09 13:14:31.299559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.192 [2024-12-09 13:14:31.299573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.192 [2024-12-09 13:14:31.299630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.192 [2024-12-09 13:14:31.299646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.192 #43 NEW cov: 12402 ft: 15139 corp: 28/607b lim: 25 exec/s: 43 rss: 74Mb L: 24/25 MS: 1 EraseBytes- 00:07:29.192 [2024-12-09 13:14:31.359522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.192 [2024-12-09 13:14:31.359548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.192 [2024-12-09 13:14:31.359609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.192 [2024-12-09 13:14:31.359626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.192 [2024-12-09 13:14:31.359684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.192 [2024-12-09 13:14:31.359700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.192 #44 NEW cov: 12402 ft: 15150 corp: 29/624b lim: 25 exec/s: 44 rss: 74Mb L: 17/25 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:29.192 [2024-12-09 13:14:31.399849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.192 [2024-12-09 13:14:31.399874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.193 [2024-12-09 13:14:31.399947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.193 [2024-12-09 13:14:31.399962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.193 [2024-12-09 13:14:31.400016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.193 [2024-12-09 13:14:31.400032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.193 [2024-12-09 13:14:31.400089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.193 [2024-12-09 13:14:31.400103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.193 [2024-12-09 13:14:31.400159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:29.193 [2024-12-09 13:14:31.400178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.193 #45 NEW cov: 12402 ft: 15157 corp: 30/649b lim: 25 exec/s: 45 rss: 74Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:29.500 [2024-12-09 13:14:31.439988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.500 [2024-12-09 13:14:31.440016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.500 [2024-12-09 13:14:31.440073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.500 [2024-12-09 13:14:31.440087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.500 [2024-12-09 13:14:31.440143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.500 [2024-12-09 13:14:31.440159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.500 [2024-12-09 13:14:31.440214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.500 [2024-12-09 13:14:31.440230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.500 [2024-12-09 13:14:31.440287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:29.500 [2024-12-09 13:14:31.440302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.500 #46 NEW cov: 12402 ft: 15168 corp: 31/674b lim: 25 exec/s: 46 rss: 74Mb L: 25/25 MS: 1 ChangeBit- 00:07:29.500 [2024-12-09 13:14:31.499679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.500 [2024-12-09 13:14:31.499705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.500 #47 NEW cov: 12402 ft: 15181 corp: 32/683b lim: 25 exec/s: 47 rss: 74Mb L: 9/25 MS: 1 EraseBytes- 00:07:29.500 [2024-12-09 13:14:31.540048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) 
sqid:1 cid:0 nsid:0 00:07:29.500 [2024-12-09 13:14:31.540076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.500 [2024-12-09 13:14:31.540113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.500 [2024-12-09 13:14:31.540128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.500 [2024-12-09 13:14:31.540186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.500 [2024-12-09 13:14:31.540202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.500 #48 NEW cov: 12402 ft: 15196 corp: 33/701b lim: 25 exec/s: 48 rss: 74Mb L: 18/25 MS: 1 EraseBytes- 00:07:29.500 [2024-12-09 13:14:31.600087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.500 [2024-12-09 13:14:31.600113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.500 [2024-12-09 13:14:31.600152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.500 [2024-12-09 13:14:31.600168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.500 #49 NEW cov: 12402 ft: 15415 corp: 34/713b lim: 25 exec/s: 24 rss: 75Mb L: 12/25 MS: 1 InsertRepeatedBytes- 00:07:29.500 #49 DONE cov: 12402 ft: 15415 corp: 34/713b lim: 25 exec/s: 24 rss: 75Mb 00:07:29.500 ###### Recommended dictionary. ###### 00:07:29.500 "\000\000" # Uses: 2 00:07:29.500 ###### End of recommended dictionary. 
######
00:07:29.500 Done 49 runs in 2 second(s)
00:07:29.500 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424'
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:29.759 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24
[2024-12-09 13:14:31.793514] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization...
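The nvmf/run.sh trace above shows how each fuzzer instance is prepared: a per-instance TCP port is derived from the fuzzer number (printf %02d 24 followed by port=4424 suggests a fixed 44 prefix plus the zero-padded number), the stock fuzz_json.conf is rewritten so the NVMe/TCP listener uses that port instead of the default 4420, two LeakSanitizer suppressions are registered via LSAN_OPTIONS, and llvm_nvme_fuzz is launched against the resulting transport ID with the time budget and corpus directory shown. A minimal standalone sketch of those steps follows; it is a reconstruction from the trace, not the actual test/fuzz/llvm/nvmf/run.sh, and the SPDK_DIR variable, the 44 port prefix, and the output redirections are assumptions the trace does not show.

  #!/usr/bin/env bash
  # Sketch reconstructed from the trace above for fuzzer instance 24.
  # Not the real test/fuzz/llvm/nvmf/run.sh: SPDK_DIR, the "44" port prefix,
  # and the redirections below are assumptions.
  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # placeholder for the checkout used in this job

  fuzzer_type=24
  timen=1
  core=0x1
  corpus_dir="$SPDK_DIR/../corpus/llvm_nvmf_$fuzzer_type"
  nvmf_cfg="/tmp/fuzz_json_$fuzzer_type.conf"
  suppress_file=/var/tmp/suppress_nvmf_fuzz
  export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"

  # The trace runs `printf %02d 24` and then sets port=4424, so the port is
  # assumed to be "44" plus the zero-padded fuzzer number.
  port="44$(printf %02d "$fuzzer_type")"
  mkdir -p "$corpus_dir"
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

  # Rewrite the stock target config so the NVMe/TCP listener uses this port
  # instead of the default 4420 (writing the result to $nvmf_cfg is assumed).
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

  # LeakSanitizer suppressions echoed in the trace; assumed to land in $suppress_file.
  echo "leak:spdk_nvmf_qpair_disconnect" > "$suppress_file"
  echo "leak:nvmf_ctrlr_create" >> "$suppress_file"

  "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
      -P "$SPDK_DIR/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
      -D "$corpus_dir" -Z "$fuzzer_type"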
00:07:29.759 [2024-12-09 13:14:31.793581] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid305060 ] 00:07:29.759 [2024-12-09 13:14:32.001859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.018 [2024-12-09 13:14:32.036049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.018 [2024-12-09 13:14:32.095111] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.018 [2024-12-09 13:14:32.111426] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:30.018 INFO: Running with entropic power schedule (0xFF, 100). 00:07:30.018 INFO: Seed: 161125173 00:07:30.018 INFO: Loaded 1 modules (390626 inline 8-bit counters): 390626 [0x2c83f0c, 0x2ce34ee), 00:07:30.018 INFO: Loaded 1 PC tables (390626 PCs): 390626 [0x2ce34f0,0x32d9310), 00:07:30.018 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:30.018 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.018 #2 INITED exec/s: 0 rss: 65Mb 00:07:30.018 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:30.018 This may also happen if the target rejected all inputs we tried so far 00:07:30.018 [2024-12-09 13:14:32.181876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.018 [2024-12-09 13:14:32.181922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.018 [2024-12-09 13:14:32.182063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.018 [2024-12-09 13:14:32.182095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.018 [2024-12-09 13:14:32.182236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.018 [2024-12-09 13:14:32.182261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.018 [2024-12-09 13:14:32.182404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.018 [2024-12-09 13:14:32.182436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.276 NEW_FUNC[1/718]: 0x467748 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:30.277 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:30.277 #4 NEW cov: 12248 ft: 12245 corp: 2/86b lim: 100 exec/s: 0 rss: 72Mb L: 85/85 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:30.536 [2024-12-09 13:14:32.531734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 
lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.531798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.536 [2024-12-09 13:14:32.531889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65504 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.531920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.536 [2024-12-09 13:14:32.531999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.532028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.536 [2024-12-09 13:14:32.532107] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.532137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.536 #10 NEW cov: 12361 ft: 12894 corp: 3/171b lim: 100 exec/s: 0 rss: 72Mb L: 85/85 MS: 1 ChangeBit- 00:07:30.536 [2024-12-09 13:14:32.601238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.601268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.536 [2024-12-09 13:14:32.601315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.601332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.536 #11 NEW cov: 12367 ft: 13552 corp: 4/215b lim: 100 exec/s: 0 rss: 72Mb L: 44/85 MS: 1 EraseBytes- 00:07:30.536 [2024-12-09 13:14:32.641556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.641584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.536 [2024-12-09 13:14:32.641632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.641647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.536 [2024-12-09 13:14:32.641685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.641701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.536 [2024-12-09 13:14:32.641754] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.641769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.536 #12 NEW cov: 12452 ft: 13935 corp: 5/309b lim: 100 exec/s: 0 rss: 72Mb L: 94/94 MS: 1 CopyPart- 00:07:30.536 [2024-12-09 13:14:32.681716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.681743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.536 [2024-12-09 13:14:32.681791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.681806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.536 [2024-12-09 13:14:32.681860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.681875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.536 [2024-12-09 13:14:32.681926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.681943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.536 #13 NEW cov: 12452 ft: 14024 corp: 6/403b lim: 100 exec/s: 0 rss: 72Mb L: 94/94 MS: 1 ChangeBinInt- 00:07:30.536 [2024-12-09 13:14:32.742045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.742071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.536 [2024-12-09 13:14:32.742135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.536 [2024-12-09 13:14:32.742151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.536 [2024-12-09 13:14:32.742204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.537 [2024-12-09 13:14:32.742221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.537 [2024-12-09 13:14:32.742274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.537 [2024-12-09 13:14:32.742290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:07:30.537 [2024-12-09 13:14:32.742346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.537 [2024-12-09 13:14:32.742362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:30.796 #14 NEW cov: 12452 ft: 14194 corp: 7/503b lim: 100 exec/s: 0 rss: 72Mb L: 100/100 MS: 1 CopyPart- 00:07:30.796 [2024-12-09 13:14:32.802039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.802066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.802128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.802144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.802200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.802215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.802268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073693102335 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.802284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.796 #15 NEW cov: 12452 ft: 14259 corp: 8/597b lim: 100 exec/s: 0 rss: 72Mb L: 94/100 MS: 1 ChangeBinInt- 00:07:30.796 [2024-12-09 13:14:32.842135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.842161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.842224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.842241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.842296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.842311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.842363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073693102335 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.842379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.796 #16 NEW cov: 12452 ft: 14313 corp: 9/691b lim: 100 exec/s: 0 rss: 72Mb L: 94/100 MS: 1 ChangeBinInt- 00:07:30.796 [2024-12-09 13:14:32.902307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.902335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.902381] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.902397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.902454] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.902470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.902523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.902539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.796 #17 NEW cov: 12452 ft: 14329 corp: 10/785b lim: 100 exec/s: 0 rss: 72Mb L: 94/100 MS: 1 ChangeByte- 00:07:30.796 [2024-12-09 13:14:32.942386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.942414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.942477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.942492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.942547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.942563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.942620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.942636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.796 #18 NEW cov: 12452 ft: 14405 corp: 11/883b lim: 100 exec/s: 0 rss: 72Mb L: 98/100 MS: 1 CrossOver- 00:07:30.796 [2024-12-09 13:14:32.982526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 
lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.982553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.982620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65504 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.982637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.982701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.982717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.796 [2024-12-09 13:14:32.982769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.796 [2024-12-09 13:14:32.982785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.796 #19 NEW cov: 12452 ft: 14466 corp: 12/968b lim: 100 exec/s: 0 rss: 72Mb L: 85/100 MS: 1 ShuffleBytes- 00:07:31.055 [2024-12-09 13:14:33.042789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.055 [2024-12-09 13:14:33.042819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.055 [2024-12-09 13:14:33.042860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.055 [2024-12-09 13:14:33.042876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.055 [2024-12-09 13:14:33.042928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.055 [2024-12-09 13:14:33.042943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.055 [2024-12-09 13:14:33.042997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.055 [2024-12-09 13:14:33.043014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.055 NEW_FUNC[1/1]: 0x1c562e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:31.055 #20 NEW cov: 12475 ft: 14506 corp: 13/1065b lim: 100 exec/s: 0 rss: 72Mb L: 97/100 MS: 1 InsertRepeatedBytes- 00:07:31.055 [2024-12-09 13:14:33.082841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.082868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.056 [2024-12-09 13:14:33.082909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.082925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.056 [2024-12-09 13:14:33.082977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.083009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.056 [2024-12-09 13:14:33.083060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.083076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.056 #21 NEW cov: 12475 ft: 14562 corp: 14/1159b lim: 100 exec/s: 0 rss: 72Mb L: 94/100 MS: 1 ChangeBit- 00:07:31.056 [2024-12-09 13:14:33.122987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.123015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.056 [2024-12-09 13:14:33.123056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65504 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.123072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.056 [2024-12-09 13:14:33.123123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.123139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.056 [2024-12-09 13:14:33.123194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.123212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.056 #22 NEW cov: 12475 ft: 14604 corp: 15/1244b lim: 100 exec/s: 0 rss: 72Mb L: 85/100 MS: 1 CopyPart- 00:07:31.056 [2024-12-09 13:14:33.162760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.162788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.056 [2024-12-09 13:14:33.162842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.162856] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.056 #23 NEW cov: 12475 ft: 14618 corp: 16/1288b lim: 100 exec/s: 23 rss: 73Mb L: 44/100 MS: 1 ShuffleBytes- 00:07:31.056 [2024-12-09 13:14:33.223246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.223273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.056 [2024-12-09 13:14:33.223321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.223337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.056 [2024-12-09 13:14:33.223391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.223406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.056 [2024-12-09 13:14:33.223459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:360569445166350335 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.223474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.056 #24 NEW cov: 12475 ft: 14691 corp: 17/1385b lim: 100 exec/s: 24 rss: 73Mb L: 97/100 MS: 1 InsertRepeatedBytes- 00:07:31.056 [2024-12-09 13:14:33.263345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.263372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.056 [2024-12-09 13:14:33.263435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.263451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.056 [2024-12-09 13:14:33.263504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.263519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.056 [2024-12-09 13:14:33.263573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.056 [2024-12-09 13:14:33.263592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.056 #25 NEW cov: 12475 ft: 14719 corp: 18/1475b lim: 100 exec/s: 25 rss: 73Mb L: 90/100 MS: 1 CrossOver- 00:07:31.314 [2024-12-09 13:14:33.303462] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.303490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.303528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65504 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.303545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.303597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.303614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.303671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.303689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.314 #26 NEW cov: 12475 ft: 14760 corp: 19/1560b lim: 100 exec/s: 26 rss: 73Mb L: 85/100 MS: 1 ChangeBinInt- 00:07:31.314 [2024-12-09 13:14:33.363627] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.363656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.363718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.363735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.363790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.363806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.363861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.363877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.314 #27 NEW cov: 12475 ft: 14772 corp: 20/1646b lim: 100 exec/s: 27 rss: 73Mb L: 86/100 MS: 1 CopyPart- 00:07:31.314 [2024-12-09 13:14:33.403613] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.403641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:31.314 [2024-12-09 13:14:33.403689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446743966335369215 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.403704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.403759] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073288476390 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.403777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.314 #28 NEW cov: 12475 ft: 15056 corp: 21/1707b lim: 100 exec/s: 28 rss: 73Mb L: 61/100 MS: 1 InsertRepeatedBytes- 00:07:31.314 [2024-12-09 13:14:33.463917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.463944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.463990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.464006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.464057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.464072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.464125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.464140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.314 #29 NEW cov: 12475 ft: 15063 corp: 22/1801b lim: 100 exec/s: 29 rss: 73Mb L: 94/100 MS: 1 CrossOver- 00:07:31.314 [2024-12-09 13:14:33.504007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.504035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.504096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.504111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.504165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.504181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.504235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.504250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.314 #30 NEW cov: 12475 ft: 15123 corp: 23/1886b lim: 100 exec/s: 30 rss: 73Mb L: 85/100 MS: 1 ChangeBit- 00:07:31.314 [2024-12-09 13:14:33.544124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.544152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.544197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.544213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.544265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.544282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.314 [2024-12-09 13:14:33.544339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.314 [2024-12-09 13:14:33.544355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.574 #31 NEW cov: 12475 ft: 15140 corp: 24/1980b lim: 100 exec/s: 31 rss: 73Mb L: 94/100 MS: 1 ChangeBinInt- 00:07:31.574 [2024-12-09 13:14:33.604297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.604325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.604363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65504 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.604380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.604433] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.604451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.604503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.604519] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.574 #32 NEW cov: 12475 ft: 15145 corp: 25/2065b lim: 100 exec/s: 32 rss: 73Mb L: 85/100 MS: 1 ChangeBit- 00:07:31.574 [2024-12-09 13:14:33.644377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.644405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.644449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.644465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.644520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.644553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.644611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.644627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.574 #38 NEW cov: 12475 ft: 15193 corp: 26/2154b lim: 100 exec/s: 38 rss: 73Mb L: 89/100 MS: 1 CrossOver- 00:07:31.574 [2024-12-09 13:14:33.684524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.684552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.684599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.684618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.684674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.684690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.684743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446743021442564095 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.684759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.574 #39 NEW cov: 12475 ft: 15200 corp: 27/2253b lim: 100 exec/s: 39 rss: 73Mb L: 99/100 MS: 1 CrossOver- 00:07:31.574 [2024-12-09 13:14:33.744728] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.744756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.744799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.744814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.744866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.744882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.744936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709487360 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.744951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.574 #40 NEW cov: 12475 ft: 15210 corp: 28/2348b lim: 100 exec/s: 40 rss: 73Mb L: 95/100 MS: 1 CrossOver- 00:07:31.574 [2024-12-09 13:14:33.784806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.784833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.784879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.784895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.784948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.784964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.574 [2024-12-09 13:14:33.785018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446743021442564095 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.574 [2024-12-09 13:14:33.785033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.835 #41 NEW cov: 12475 ft: 15230 corp: 29/2447b lim: 100 exec/s: 41 rss: 73Mb L: 99/100 MS: 1 ChangeByte- 00:07:31.835 [2024-12-09 13:14:33.844980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:33.845011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:31.835 [2024-12-09 13:14:33.845065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709501439 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:33.845081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.835 [2024-12-09 13:14:33.845136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:33.845152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.835 [2024-12-09 13:14:33.845205] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709487360 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:33.845222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.835 #42 NEW cov: 12475 ft: 15233 corp: 30/2542b lim: 100 exec/s: 42 rss: 73Mb L: 95/100 MS: 1 ChangeByte- 00:07:31.835 [2024-12-09 13:14:33.905164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:33.905192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.835 [2024-12-09 13:14:33.905236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:33.905252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.835 [2024-12-09 13:14:33.905306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:33.905338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.835 [2024-12-09 13:14:33.905392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446743021442564095 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:33.905407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.835 #43 NEW cov: 12475 ft: 15236 corp: 31/2641b lim: 100 exec/s: 43 rss: 73Mb L: 99/100 MS: 1 ChangeBinInt- 00:07:31.835 [2024-12-09 13:14:33.965328] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:33.965355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.835 [2024-12-09 13:14:33.965416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:33.965431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.835 [2024-12-09 13:14:33.965486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:33.965502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.835 [2024-12-09 13:14:33.965555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446743021442564095 len:33280 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:33.965575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.835 #44 NEW cov: 12475 ft: 15248 corp: 32/2740b lim: 100 exec/s: 44 rss: 73Mb L: 99/100 MS: 1 ChangeByte- 00:07:31.835 [2024-12-09 13:14:34.005412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:34.005438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.835 [2024-12-09 13:14:34.005502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:34.005518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.835 [2024-12-09 13:14:34.005571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:34.005591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.835 [2024-12-09 13:14:34.005647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073693102335 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:34.005673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.835 #45 NEW cov: 12475 ft: 15276 corp: 33/2834b lim: 100 exec/s: 45 rss: 73Mb L: 94/100 MS: 1 ChangeBit- 00:07:31.835 [2024-12-09 13:14:34.045510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:34.045537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.835 [2024-12-09 13:14:34.045606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709501439 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:34.045622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.835 [2024-12-09 13:14:34.045687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:34.045703] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.835 [2024-12-09 13:14:34.045757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709487360 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.835 [2024-12-09 13:14:34.045772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.095 #46 NEW cov: 12475 ft: 15282 corp: 34/2929b lim: 100 exec/s: 46 rss: 74Mb L: 95/100 MS: 1 ShuffleBytes- 00:07:32.095 [2024-12-09 13:14:34.105671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.095 [2024-12-09 13:14:34.105699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.095 [2024-12-09 13:14:34.105744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65504 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.095 [2024-12-09 13:14:34.105759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.095 [2024-12-09 13:14:34.105811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.095 [2024-12-09 13:14:34.105829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.095 [2024-12-09 13:14:34.105881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446528569430507519 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.095 [2024-12-09 13:14:34.105897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.095 #47 NEW cov: 12475 ft: 15286 corp: 35/3014b lim: 100 exec/s: 47 rss: 74Mb L: 85/100 MS: 1 ChangeByte- 00:07:32.095 [2024-12-09 13:14:34.165851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.095 [2024-12-09 13:14:34.165878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.095 [2024-12-09 13:14:34.165941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.095 [2024-12-09 13:14:34.165957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.095 [2024-12-09 13:14:34.166012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073693429759 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.095 [2024-12-09 13:14:34.166028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.095 [2024-12-09 13:14:34.166079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:32.095 [2024-12-09 13:14:34.166095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.095 #48 NEW cov: 12475 ft: 15288 corp: 36/3108b lim: 100 exec/s: 24 rss: 74Mb L: 94/100 MS: 1 ChangeBinInt- 00:07:32.096 #48 DONE cov: 12475 ft: 15288 corp: 36/3108b lim: 100 exec/s: 24 rss: 74Mb 00:07:32.096 Done 48 runs in 2 second(s) 00:07:32.096 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:32.096 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.096 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.096 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:32.096 00:07:32.096 real 1m3.829s 00:07:32.096 user 1m40.103s 00:07:32.096 sys 0m7.489s 00:07:32.096 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.096 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:32.096 ************************************ 00:07:32.096 END TEST nvmf_llvm_fuzz 00:07:32.096 ************************************ 00:07:32.355 13:14:34 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:32.355 13:14:34 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:32.355 13:14:34 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:32.355 13:14:34 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.355 13:14:34 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.355 13:14:34 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:32.355 ************************************ 00:07:32.355 START TEST vfio_llvm_fuzz 00:07:32.355 ************************************ 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:32.355 * Looking for test storage... 
00:07:32.355 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:32.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.355 --rc genhtml_branch_coverage=1 00:07:32.355 --rc genhtml_function_coverage=1 00:07:32.355 --rc genhtml_legend=1 00:07:32.355 --rc geninfo_all_blocks=1 00:07:32.355 --rc geninfo_unexecuted_blocks=1 00:07:32.355 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.355 ' 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:32.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.355 --rc genhtml_branch_coverage=1 00:07:32.355 --rc genhtml_function_coverage=1 00:07:32.355 --rc genhtml_legend=1 00:07:32.355 --rc geninfo_all_blocks=1 00:07:32.355 --rc geninfo_unexecuted_blocks=1 00:07:32.355 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.355 ' 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:32.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.355 --rc genhtml_branch_coverage=1 00:07:32.355 --rc genhtml_function_coverage=1 00:07:32.355 --rc genhtml_legend=1 00:07:32.355 --rc geninfo_all_blocks=1 00:07:32.355 --rc geninfo_unexecuted_blocks=1 00:07:32.355 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.355 ' 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:32.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.355 --rc genhtml_branch_coverage=1 00:07:32.355 --rc genhtml_function_coverage=1 00:07:32.355 --rc genhtml_legend=1 00:07:32.355 --rc geninfo_all_blocks=1 00:07:32.355 --rc geninfo_unexecuted_blocks=1 00:07:32.355 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.355 ' 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:32.355 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:32.619 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:32.619 #define SPDK_CONFIG_H 00:07:32.619 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:32.619 #define SPDK_CONFIG_APPS 1 00:07:32.619 #define SPDK_CONFIG_ARCH native 00:07:32.619 #undef SPDK_CONFIG_ASAN 00:07:32.619 #undef SPDK_CONFIG_AVAHI 00:07:32.619 #undef SPDK_CONFIG_CET 00:07:32.619 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:32.619 #define SPDK_CONFIG_COVERAGE 1 00:07:32.619 #define SPDK_CONFIG_CROSS_PREFIX 00:07:32.619 #undef SPDK_CONFIG_CRYPTO 00:07:32.619 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:32.619 #undef SPDK_CONFIG_CUSTOMOCF 00:07:32.619 #undef SPDK_CONFIG_DAOS 00:07:32.619 #define SPDK_CONFIG_DAOS_DIR 00:07:32.619 #define SPDK_CONFIG_DEBUG 1 00:07:32.619 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:32.619 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:32.619 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:32.619 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:32.619 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:32.619 #undef SPDK_CONFIG_DPDK_UADK 00:07:32.619 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:32.619 #define SPDK_CONFIG_EXAMPLES 1 00:07:32.619 #undef SPDK_CONFIG_FC 00:07:32.619 #define SPDK_CONFIG_FC_PATH 00:07:32.619 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:32.619 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:32.619 #define SPDK_CONFIG_FSDEV 1 00:07:32.619 #undef SPDK_CONFIG_FUSE 00:07:32.619 #define SPDK_CONFIG_FUZZER 1 00:07:32.619 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:32.619 #undef 
SPDK_CONFIG_GOLANG 00:07:32.619 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:32.619 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:32.619 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:32.619 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:32.619 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:32.619 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:32.619 #undef SPDK_CONFIG_HAVE_LZ4 00:07:32.619 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:32.619 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:32.619 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:32.619 #define SPDK_CONFIG_IDXD 1 00:07:32.619 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:32.619 #undef SPDK_CONFIG_IPSEC_MB 00:07:32.619 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:32.619 #define SPDK_CONFIG_ISAL 1 00:07:32.619 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:32.619 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:32.619 #define SPDK_CONFIG_LIBDIR 00:07:32.619 #undef SPDK_CONFIG_LTO 00:07:32.619 #define SPDK_CONFIG_MAX_LCORES 128 00:07:32.619 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:07:32.619 #define SPDK_CONFIG_NVME_CUSE 1 00:07:32.619 #undef SPDK_CONFIG_OCF 00:07:32.619 #define SPDK_CONFIG_OCF_PATH 00:07:32.619 #define SPDK_CONFIG_OPENSSL_PATH 00:07:32.619 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:32.619 #define SPDK_CONFIG_PGO_DIR 00:07:32.619 #undef SPDK_CONFIG_PGO_USE 00:07:32.619 #define SPDK_CONFIG_PREFIX /usr/local 00:07:32.619 #undef SPDK_CONFIG_RAID5F 00:07:32.619 #undef SPDK_CONFIG_RBD 00:07:32.619 #define SPDK_CONFIG_RDMA 1 00:07:32.619 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:32.619 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:32.619 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:32.619 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:32.619 #undef SPDK_CONFIG_SHARED 00:07:32.619 #undef SPDK_CONFIG_SMA 00:07:32.619 #define SPDK_CONFIG_TESTS 1 00:07:32.619 #undef SPDK_CONFIG_TSAN 00:07:32.619 #define SPDK_CONFIG_UBLK 1 00:07:32.619 #define SPDK_CONFIG_UBSAN 1 00:07:32.619 #undef SPDK_CONFIG_UNIT_TESTS 00:07:32.619 #undef SPDK_CONFIG_URING 00:07:32.619 #define SPDK_CONFIG_URING_PATH 00:07:32.619 #undef SPDK_CONFIG_URING_ZNS 00:07:32.619 #undef SPDK_CONFIG_USDT 00:07:32.619 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:32.619 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:32.619 #define SPDK_CONFIG_VFIO_USER 1 00:07:32.619 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:32.620 #define SPDK_CONFIG_VHOST 1 00:07:32.620 #define SPDK_CONFIG_VIRTIO 1 00:07:32.620 #undef SPDK_CONFIG_VTUNE 00:07:32.620 #define SPDK_CONFIG_VTUNE_DIR 00:07:32.620 #define SPDK_CONFIG_WERROR 1 00:07:32.620 #define SPDK_CONFIG_WPDK_DIR 00:07:32.620 #undef SPDK_CONFIG_XNVME 00:07:32.620 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:32.620 13:14:34 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:32.620 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:32.621 13:14:34 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 305623 ]] 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 305623 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.HYO3nZ 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.HYO3nZ/tests/vfio /tmp/spdk.HYO3nZ 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=58415050752 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67015417856 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=8600367104 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=33504280576 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=33507708928 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=3428352 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=13397094400 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=13403086848 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5992448 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=33507393536 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=33507708928 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=315392 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6701527040 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6701539328 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:07:32.621 * Looking for test storage... 
00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=58415050752 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:07:32.621 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=10814959616 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:32.622 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1698 -- # set -o errtrace 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1703 -- # true 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # xtrace_fd 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:32.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.622 --rc genhtml_branch_coverage=1 00:07:32.622 --rc genhtml_function_coverage=1 00:07:32.622 --rc genhtml_legend=1 00:07:32.622 --rc geninfo_all_blocks=1 00:07:32.622 --rc geninfo_unexecuted_blocks=1 00:07:32.622 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.622 ' 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:32.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.622 --rc genhtml_branch_coverage=1 00:07:32.622 --rc genhtml_function_coverage=1 00:07:32.622 --rc genhtml_legend=1 00:07:32.622 --rc geninfo_all_blocks=1 00:07:32.622 --rc geninfo_unexecuted_blocks=1 00:07:32.622 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.622 ' 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:32.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.622 --rc genhtml_branch_coverage=1 00:07:32.622 --rc genhtml_function_coverage=1 00:07:32.622 --rc genhtml_legend=1 00:07:32.622 --rc geninfo_all_blocks=1 00:07:32.622 --rc geninfo_unexecuted_blocks=1 00:07:32.622 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.622 ' 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:32.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.622 --rc genhtml_branch_coverage=1 00:07:32.622 --rc genhtml_function_coverage=1 00:07:32.622 --rc genhtml_legend=1 00:07:32.622 --rc geninfo_all_blocks=1 00:07:32.622 --rc geninfo_unexecuted_blocks=1 00:07:32.622 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.622 ' 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:32.622 13:14:34 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:32.622 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:32.882 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:32.882 13:14:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:32.882 [2024-12-09 13:14:34.910187] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:32.882 [2024-12-09 13:14:34.910260] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid305691 ] 00:07:32.882 [2024-12-09 13:14:35.004123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.882 [2024-12-09 13:14:35.043406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.141 INFO: Running with entropic power schedule (0xFF, 100). 00:07:33.141 INFO: Seed: 3268139091 00:07:33.141 INFO: Loaded 1 modules (387862 inline 8-bit counters): 387862 [0x2c4570c, 0x2ca4222), 00:07:33.141 INFO: Loaded 1 PC tables (387862 PCs): 387862 [0x2ca4228,0x328f388), 00:07:33.141 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:33.141 INFO: A corpus is not provided, starting from an empty corpus 00:07:33.141 #2 INITED exec/s: 0 rss: 67Mb 00:07:33.141 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:33.141 This may also happen if the target rejected all inputs we tried so far 00:07:33.141 [2024-12-09 13:14:35.286286] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:33.657 NEW_FUNC[1/676]: 0x43b608 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:33.657 NEW_FUNC[2/676]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:33.657 #15 NEW cov: 11241 ft: 11209 corp: 2/7b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes- 00:07:33.916 #16 NEW cov: 11269 ft: 14943 corp: 3/13b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 CopyPart- 00:07:34.175 NEW_FUNC[1/1]: 0x1c22738 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:34.175 #22 NEW cov: 11286 ft: 16644 corp: 4/19b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 CMP- DE: "\001\001"- 00:07:34.175 #23 NEW cov: 11296 ft: 17522 corp: 5/25b lim: 6 exec/s: 23 rss: 74Mb L: 6/6 MS: 1 PersAutoDict- DE: "\001\001"- 00:07:34.433 #24 NEW cov: 11296 ft: 17776 corp: 6/31b lim: 6 exec/s: 24 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:07:34.692 #25 NEW cov: 11296 ft: 18064 corp: 7/37b lim: 6 exec/s: 25 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:07:34.950 #26 NEW cov: 11296 ft: 18358 corp: 8/43b lim: 6 exec/s: 26 rss: 75Mb L: 6/6 MS: 1 ChangeBit- 00:07:35.210 #27 NEW cov: 11303 ft: 18674 corp: 9/49b lim: 6 exec/s: 27 rss: 75Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:35.210 #28 NEW cov: 11303 ft: 19011 corp: 10/55b lim: 6 exec/s: 14 rss: 75Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:35.210 #28 DONE cov: 11303 ft: 19011 corp: 10/55b lim: 6 exec/s: 14 rss: 75Mb 00:07:35.210 ###### Recommended dictionary. ###### 00:07:35.210 "\001\001" # Uses: 1 00:07:35.210 ###### End of recommended dictionary. 
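The "#N NEW cov:" lines above are standard libFuzzer progress output: cov is the number of coverage points reached so far, ft counts coverage features, corp gives the corpus size as units/bytes, lim is the current input-length limit, exec/s the execution rate, rss the resident memory, and MS lists the mutation sequence that produced the new input; the "#N DONE" line repeats the final totals and the recommended-dictionary block lists byte sequences the fuzzer found useful. Purely as an illustration (fuzz.log is a hypothetical saved copy of this output, not a file the test scripts create), the coverage progression can be pulled out of such a log with:

    # Illustrative only: print "<input number> <coverage>" pairs from a saved log.
    grep -o '#[0-9]* NEW cov: [0-9]*' fuzz.log | awk '{print $1, $4}'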
###### 00:07:35.210 Done 28 runs in 2 second(s) 00:07:35.210 [2024-12-09 13:14:37.427782] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:35.469 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:35.469 13:14:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:35.469 [2024-12-09 13:14:37.689600] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
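Each fuzzer type repeats the setup just traced: vfio/run.sh creates a fresh /tmp/vfio-user-N tree, rewrites the template test/fuzz/llvm/vfio/fuzz_vfio_json.conf so the vfio-user socket paths point into that tree (the redirection of sed's output to the per-run config is not visible in the xtrace), appends two known-leak suppressions to the file named in LSAN_OPTIONS, and launches llvm_vfio_fuzz against it. A simplified sketch of that pattern, where the run index, the output redirection and the relative paths are assumptions drawn from the trace rather than the exact run.sh code:

    # Simplified sketch of the per-run setup seen above; N and the output
    # redirection are inferred from the trace, not copied from vfio/run.sh.
    N=1
    dir=/tmp/vfio-user-$N
    mkdir -p "$dir/domain/1" "$dir/domain/2"
    sed -e "s%/tmp/vfio-user/domain/1%$dir/domain/1%;
            s%/tmp/vfio-user/domain/2%$dir/domain/2%" \
        test/fuzz/llvm/vfio/fuzz_vfio_json.conf > "$dir/fuzz_vfio_json.conf"
    echo leak:spdk_nvmf_qpair_disconnect >> /var/tmp/suppress_vfio_fuzz
    echo leak:nvmf_ctrlr_create          >> /var/tmp/suppress_vfio_fuzz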
00:07:35.469 [2024-12-09 13:14:37.689662] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid306219 ] 00:07:35.728 [2024-12-09 13:14:37.783416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.728 [2024-12-09 13:14:37.823720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.987 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.987 INFO: Seed: 1753167904 00:07:35.987 INFO: Loaded 1 modules (387862 inline 8-bit counters): 387862 [0x2c4570c, 0x2ca4222), 00:07:35.987 INFO: Loaded 1 PC tables (387862 PCs): 387862 [0x2ca4228,0x328f388), 00:07:35.987 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:35.987 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.987 #2 INITED exec/s: 0 rss: 67Mb 00:07:35.987 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:35.987 This may also happen if the target rejected all inputs we tried so far 00:07:35.987 [2024-12-09 13:14:38.067198] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:35.987 [2024-12-09 13:14:38.143549] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:35.987 [2024-12-09 13:14:38.143581] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:35.988 [2024-12-09 13:14:38.143609] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:36.506 NEW_FUNC[1/678]: 0x43bba8 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:36.506 NEW_FUNC[2/678]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:36.506 #12 NEW cov: 11248 ft: 11216 corp: 2/5b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 5 CopyPart-ChangeByte-ChangeBit-InsertByte-CopyPart- 00:07:36.506 [2024-12-09 13:14:38.628674] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:36.506 [2024-12-09 13:14:38.628707] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:36.506 [2024-12-09 13:14:38.628725] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:36.506 #18 NEW cov: 11262 ft: 13461 corp: 3/9b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:36.765 [2024-12-09 13:14:38.813560] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:36.765 [2024-12-09 13:14:38.813584] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:36.765 [2024-12-09 13:14:38.813608] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:36.765 NEW_FUNC[1/1]: 0x1c22738 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:36.765 #19 NEW cov: 11282 ft: 13823 corp: 4/13b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:07:36.765 [2024-12-09 13:14:38.997164] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:36.765 [2024-12-09 13:14:38.997186] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid 
argument 00:07:36.765 [2024-12-09 13:14:38.997204] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:37.023 #20 NEW cov: 11282 ft: 14705 corp: 5/17b lim: 4 exec/s: 20 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:37.023 [2024-12-09 13:14:39.172495] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:37.023 [2024-12-09 13:14:39.172518] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:37.023 [2024-12-09 13:14:39.172535] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:37.281 #26 NEW cov: 11282 ft: 15862 corp: 6/21b lim: 4 exec/s: 26 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:07:37.281 [2024-12-09 13:14:39.365686] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:37.281 [2024-12-09 13:14:39.365708] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:37.281 [2024-12-09 13:14:39.365725] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:37.281 #27 NEW cov: 11282 ft: 15900 corp: 7/25b lim: 4 exec/s: 27 rss: 74Mb L: 4/4 MS: 1 CMP- DE: "\377\377\377\017"- 00:07:37.540 [2024-12-09 13:14:39.549649] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:37.540 [2024-12-09 13:14:39.549672] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:37.540 [2024-12-09 13:14:39.549689] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:37.540 #28 NEW cov: 11282 ft: 16073 corp: 8/29b lim: 4 exec/s: 28 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:07:37.540 [2024-12-09 13:14:39.733524] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:37.540 [2024-12-09 13:14:39.733547] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:37.540 [2024-12-09 13:14:39.733564] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:37.800 #29 NEW cov: 11289 ft: 16654 corp: 9/33b lim: 4 exec/s: 29 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:07:37.800 [2024-12-09 13:14:39.911817] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:37.800 [2024-12-09 13:14:39.911840] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:37.800 [2024-12-09 13:14:39.911861] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:37.800 #30 NEW cov: 11289 ft: 16765 corp: 10/37b lim: 4 exec/s: 30 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:07:38.061 [2024-12-09 13:14:40.102853] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:38.061 [2024-12-09 13:14:40.102879] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:38.061 [2024-12-09 13:14:40.102898] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:38.061 #31 NEW cov: 11289 ft: 16889 corp: 11/41b lim: 4 exec/s: 15 rss: 75Mb L: 4/4 MS: 1 PersAutoDict- DE: "\377\377\377\017"- 00:07:38.061 #31 DONE cov: 11289 ft: 16889 corp: 11/41b lim: 4 exec/s: 15 rss: 75Mb 00:07:38.061 ###### Recommended dictionary. ###### 00:07:38.061 "\377\377\377\017" # Uses: 1 00:07:38.061 ###### End of recommended dictionary. 
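The recommended-dictionary blocks ("\001\001" after the first run, "\377\377\377\017" here) are libFuzzer suggesting byte sequences worth keeping in future mutations. Purely as an illustration, and only if the harness forwards standard libFuzzer options (that forwarding, and the file name, are assumptions, not part of the test scripts), such entries could be written to a dictionary file in libFuzzer/AFL syntax and passed back with -dict= on a later run:

    # Illustrative only: the octal escapes above become hex escapes in
    # dictionary syntax ("\001\001" -> "\x01\x01").
    echo 'kw1="\x01\x01"'          >  vfio_fuzz.dict
    echo 'kw2="\xff\xff\xff\x0f"'  >> vfio_fuzz.dict
    # then, hypothetically: llvm_vfio_fuzz <usual arguments> -dict=vfio_fuzz.dict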
###### 00:07:38.061 Done 31 runs in 2 second(s) 00:07:38.061 [2024-12-09 13:14:40.227787] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:38.321 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:38.321 13:14:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:38.321 [2024-12-09 13:14:40.499200] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
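Each fuzzer type also boots its own short-lived SPDK application; the DPDK EAL parameter lines recorded throughout this log show -c 0x1 pinning the reactor to a single core (matching the "Total cores available: 1" and "Reactor started on core 0" notices) and a per-process --file-prefix=spdk_pidNNNNNN keeping each instance's runtime files separate. Purely as an illustration (fuzz.log again being a hypothetical saved copy of this output), the per-run prefixes can be listed with:

    # Illustrative only: one DPDK file prefix per fuzzer run.
    grep -o 'file-prefix=spdk_pid[0-9]*' fuzz.log | sort -u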
00:07:38.321 [2024-12-09 13:14:40.499268] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid306761 ] 00:07:38.580 [2024-12-09 13:14:40.593947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.580 [2024-12-09 13:14:40.633704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.580 INFO: Running with entropic power schedule (0xFF, 100). 00:07:38.580 INFO: Seed: 269213336 00:07:38.841 INFO: Loaded 1 modules (387862 inline 8-bit counters): 387862 [0x2c4570c, 0x2ca4222), 00:07:38.841 INFO: Loaded 1 PC tables (387862 PCs): 387862 [0x2ca4228,0x328f388), 00:07:38.841 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:38.841 INFO: A corpus is not provided, starting from an empty corpus 00:07:38.841 #2 INITED exec/s: 0 rss: 67Mb 00:07:38.841 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:38.841 This may also happen if the target rejected all inputs we tried so far 00:07:38.841 [2024-12-09 13:14:40.877875] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:38.841 [2024-12-09 13:14:40.952237] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:39.100 NEW_FUNC[1/677]: 0x43c598 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:39.101 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:39.101 #8 NEW cov: 11234 ft: 11206 corp: 2/9b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:39.360 [2024-12-09 13:14:41.436188] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:39.360 #9 NEW cov: 11248 ft: 14545 corp: 3/17b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:07:39.620 [2024-12-09 13:14:41.619278] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:39.621 NEW_FUNC[1/1]: 0x1c22738 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:39.621 #11 NEW cov: 11265 ft: 15477 corp: 4/25b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 2 InsertRepeatedBytes-InsertByte- 00:07:39.621 [2024-12-09 13:14:41.816539] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:39.881 #12 NEW cov: 11265 ft: 15845 corp: 5/33b lim: 8 exec/s: 12 rss: 75Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:39.881 [2024-12-09 13:14:42.002748] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:39.881 #13 NEW cov: 11265 ft: 16290 corp: 6/41b lim: 8 exec/s: 13 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:07:40.141 [2024-12-09 13:14:42.188307] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:40.141 #14 NEW cov: 11265 ft: 16592 corp: 7/49b lim: 8 exec/s: 14 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:40.142 [2024-12-09 13:14:42.374680] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:40.402 #15 NEW cov: 11265 ft: 16669 corp: 8/57b lim: 8 exec/s: 15 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:40.402 [2024-12-09 
13:14:42.559741] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:40.663 #17 NEW cov: 11272 ft: 17292 corp: 9/65b lim: 8 exec/s: 17 rss: 76Mb L: 8/8 MS: 2 CrossOver-CrossOver- 00:07:40.663 [2024-12-09 13:14:42.741890] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:40.663 #18 NEW cov: 11272 ft: 17613 corp: 10/73b lim: 8 exec/s: 18 rss: 76Mb L: 8/8 MS: 1 ChangeBit- 00:07:40.924 [2024-12-09 13:14:42.925034] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:40.924 #29 NEW cov: 11272 ft: 17679 corp: 11/81b lim: 8 exec/s: 14 rss: 76Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:40.924 #29 DONE cov: 11272 ft: 17679 corp: 11/81b lim: 8 exec/s: 14 rss: 76Mb 00:07:40.924 Done 29 runs in 2 second(s) 00:07:40.924 [2024-12-09 13:14:43.049787] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:41.185 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:41.185 13:14:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c 
/tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:41.185 [2024-12-09 13:14:43.319798] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:41.185 [2024-12-09 13:14:43.319865] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307297 ] 00:07:41.185 [2024-12-09 13:14:43.413939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.447 [2024-12-09 13:14:43.453966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.447 INFO: Running with entropic power schedule (0xFF, 100). 00:07:41.447 INFO: Seed: 3087198411 00:07:41.447 INFO: Loaded 1 modules (387862 inline 8-bit counters): 387862 [0x2c4570c, 0x2ca4222), 00:07:41.447 INFO: Loaded 1 PC tables (387862 PCs): 387862 [0x2ca4228,0x328f388), 00:07:41.447 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:41.447 INFO: A corpus is not provided, starting from an empty corpus 00:07:41.447 #2 INITED exec/s: 0 rss: 67Mb 00:07:41.447 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:41.447 This may also happen if the target rejected all inputs we tried so far 00:07:41.707 [2024-12-09 13:14:43.691977] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:41.968 NEW_FUNC[1/677]: 0x43cc88 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:41.968 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:41.968 #91 NEW cov: 11242 ft: 11188 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 4 InsertRepeatedBytes-CrossOver-CopyPart-CopyPart- 00:07:42.229 #92 NEW cov: 11256 ft: 14720 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:42.490 NEW_FUNC[1/1]: 0x1c22738 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:42.490 #93 NEW cov: 11273 ft: 15506 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:07:42.490 #94 NEW cov: 11273 ft: 15790 corp: 5/129b lim: 32 exec/s: 94 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:07:42.751 #95 NEW cov: 11273 ft: 16155 corp: 6/161b lim: 32 exec/s: 95 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:07:43.012 #96 NEW cov: 11273 ft: 17210 corp: 7/193b lim: 32 exec/s: 96 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:43.271 #98 NEW cov: 11273 ft: 17602 corp: 8/225b lim: 32 exec/s: 98 rss: 74Mb L: 32/32 MS: 2 EraseBytes-InsertByte- 00:07:43.271 #104 NEW cov: 11280 ft: 17871 corp: 9/257b lim: 32 exec/s: 104 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:43.532 #105 NEW cov: 11280 ft: 18004 corp: 10/289b lim: 32 exec/s: 105 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:07:43.792 #111 NEW cov: 11280 ft: 18105 corp: 11/321b lim: 32 exec/s: 55 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:07:43.793 #111 DONE cov: 11280 ft: 18105 corp: 11/321b lim: 32 exec/s: 55 rss: 75Mb 00:07:43.793 Done 111 runs in 2 second(s) 00:07:43.793 [2024-12-09 13:14:45.841784] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:44.053 13:14:46 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:44.053 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:44.053 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:44.053 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:44.053 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:44.053 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:44.053 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:44.053 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:44.053 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:44.053 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:44.053 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:44.053 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:44.053 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:44.053 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:44.053 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:44.054 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:44.054 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:44.054 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:44.054 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:44.054 13:14:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:44.054 [2024-12-09 13:14:46.110321] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:44.054 [2024-12-09 13:14:46.110405] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307740 ] 00:07:44.054 [2024-12-09 13:14:46.203933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.054 [2024-12-09 13:14:46.244427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.314 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:44.314 INFO: Seed: 1588250891 00:07:44.314 INFO: Loaded 1 modules (387862 inline 8-bit counters): 387862 [0x2c4570c, 0x2ca4222), 00:07:44.314 INFO: Loaded 1 PC tables (387862 PCs): 387862 [0x2ca4228,0x328f388), 00:07:44.314 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:44.314 INFO: A corpus is not provided, starting from an empty corpus 00:07:44.314 #2 INITED exec/s: 0 rss: 67Mb 00:07:44.314 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:44.314 This may also happen if the target rejected all inputs we tried so far 00:07:44.314 [2024-12-09 13:14:46.488253] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:44.841 NEW_FUNC[1/677]: 0x43d508 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:44.841 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:44.841 #41 NEW cov: 11244 ft: 11074 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 4 ShuffleBytes-InsertRepeatedBytes-InsertRepeatedBytes-InsertRepeatedBytes- 00:07:45.103 #42 NEW cov: 11258 ft: 14362 corp: 3/65b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:45.103 NEW_FUNC[1/1]: 0x1c22738 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:45.103 #43 NEW cov: 11275 ft: 15434 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:07:45.364 #44 NEW cov: 11275 ft: 16192 corp: 5/129b lim: 32 exec/s: 44 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:45.624 #45 NEW cov: 11275 ft: 16254 corp: 6/161b lim: 32 exec/s: 45 rss: 75Mb L: 32/32 MS: 1 ChangeASCIIInt- 00:07:45.624 [2024-12-09 13:14:47.755685] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=306 offset=0x38000000000000 prot=0x3: Invalid argument 00:07:45.624 [2024-12-09 13:14:47.755722] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x38000000000000 flags=0x3: Invalid argument 00:07:45.624 [2024-12-09 13:14:47.755733] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:45.624 [2024-12-09 13:14:47.755750] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:45.624 [2024-12-09 13:14:47.756719] vfio_user.c:3141:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:07:45.624 [2024-12-09 13:14:47.756739] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:45.624 [2024-12-09 13:14:47.756755] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:45.885 NEW_FUNC[1/1]: 0x1590bf8 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3131 00:07:45.885 #50 NEW cov: 11288 ft: 16508 corp: 7/193b lim: 32 exec/s: 50 rss: 75Mb L: 32/32 MS: 5 ChangeByte-CrossOver-InsertByte-InsertRepeatedBytes-CopyPart- 00:07:45.885 #51 NEW cov: 11288 ft: 17256 corp: 8/225b lim: 32 exec/s: 51 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:07:46.146 #52 NEW cov: 11295 ft: 17620 corp: 9/257b lim: 32 exec/s: 52 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:46.146 [2024-12-09 13:14:48.329475] vfio_user.c:3143:vfio_user_log: *ERROR*: 
/tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=306 offset=0x38200000000000 prot=0x3: Invalid argument 00:07:46.146 [2024-12-09 13:14:48.329499] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x38200000000000 flags=0x3: Invalid argument 00:07:46.146 [2024-12-09 13:14:48.329510] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:46.146 [2024-12-09 13:14:48.329527] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:46.146 [2024-12-09 13:14:48.330462] vfio_user.c:3141:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:07:46.146 [2024-12-09 13:14:48.330482] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:46.146 [2024-12-09 13:14:48.330497] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:46.407 #53 NEW cov: 11295 ft: 17680 corp: 10/289b lim: 32 exec/s: 53 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:07:46.407 #59 NEW cov: 11295 ft: 17934 corp: 11/321b lim: 32 exec/s: 29 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:07:46.407 #59 DONE cov: 11295 ft: 17934 corp: 11/321b lim: 32 exec/s: 29 rss: 75Mb 00:07:46.407 Done 59 runs in 2 second(s) 00:07:46.407 [2024-12-09 13:14:48.619784] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:46.669 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:46.669 13:14:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:46.669 [2024-12-09 13:14:48.887840] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:46.669 [2024-12-09 13:14:48.887908] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308129 ] 00:07:46.930 [2024-12-09 13:14:48.983277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.930 [2024-12-09 13:14:49.023851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.191 INFO: Running with entropic power schedule (0xFF, 100). 00:07:47.191 INFO: Seed: 71267423 00:07:47.191 INFO: Loaded 1 modules (387862 inline 8-bit counters): 387862 [0x2c4570c, 0x2ca4222), 00:07:47.191 INFO: Loaded 1 PC tables (387862 PCs): 387862 [0x2ca4228,0x328f388), 00:07:47.191 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:47.191 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.191 #2 INITED exec/s: 0 rss: 68Mb 00:07:47.191 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:47.191 This may also happen if the target rejected all inputs we tried so far 00:07:47.191 [2024-12-09 13:14:49.269757] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:47.191 [2024-12-09 13:14:49.341795] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.191 [2024-12-09 13:14:49.341833] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.712 NEW_FUNC[1/678]: 0x43df08 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:47.712 NEW_FUNC[2/678]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:47.712 #22 NEW cov: 11253 ft: 10786 corp: 2/14b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 5 InsertRepeatedBytes-CrossOver-ShuffleBytes-InsertRepeatedBytes-CopyPart- 00:07:47.712 [2024-12-09 13:14:49.813449] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.712 [2024-12-09 13:14:49.813491] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.712 #26 NEW cov: 11267 ft: 14590 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 4 InsertByte-EraseBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:47.973 [2024-12-09 13:14:50.023496] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.973 [2024-12-09 13:14:50.023530] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.973 NEW_FUNC[1/1]: 0x1c22738 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:47.973 #27 NEW cov: 11284 ft: 15748 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 ChangeByte- 00:07:48.233 [2024-12-09 13:14:50.228219] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:48.234 [2024-12-09 13:14:50.228252] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.234 #28 NEW cov: 11284 ft: 15849 corp: 5/53b lim: 13 exec/s: 28 rss: 76Mb L: 13/13 MS: 1 ChangeByte- 00:07:48.234 [2024-12-09 13:14:50.417558] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:48.234 [2024-12-09 13:14:50.417593] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.494 #29 NEW cov: 11284 ft: 16772 corp: 6/66b lim: 13 exec/s: 29 rss: 77Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:48.494 [2024-12-09 13:14:50.612305] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:48.494 [2024-12-09 13:14:50.612336] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.494 #30 NEW cov: 11284 ft: 17193 corp: 7/79b lim: 13 exec/s: 30 rss: 77Mb L: 13/13 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\001"- 00:07:48.755 [2024-12-09 13:14:50.800050] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:48.755 [2024-12-09 13:14:50.800081] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.755 #35 NEW cov: 11284 ft: 17284 corp: 8/92b lim: 13 exec/s: 35 rss: 77Mb L: 13/13 MS: 5 EraseBytes-ChangeBinInt-InsertRepeatedBytes-InsertByte-CrossOver- 00:07:49.015 [2024-12-09 13:14:51.008574] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: 
Invalid argument 00:07:49.015 [2024-12-09 13:14:51.008612] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:49.015 #36 NEW cov: 11291 ft: 18060 corp: 9/105b lim: 13 exec/s: 36 rss: 77Mb L: 13/13 MS: 1 CMP- DE: "\177\000\000\000\000\000\000\000"- 00:07:49.015 [2024-12-09 13:14:51.221086] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:49.015 [2024-12-09 13:14:51.221116] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:49.276 #37 NEW cov: 11291 ft: 18281 corp: 10/118b lim: 13 exec/s: 18 rss: 77Mb L: 13/13 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:49.276 #37 DONE cov: 11291 ft: 18281 corp: 10/118b lim: 13 exec/s: 18 rss: 77Mb 00:07:49.276 ###### Recommended dictionary. ###### 00:07:49.276 "\000\000\000\000\000\000\000\001" # Uses: 0 00:07:49.276 "\177\000\000\000\000\000\000\000" # Uses: 0 00:07:49.276 "\001\000\000\000\000\000\000\000" # Uses: 0 00:07:49.276 ###### End of recommended dictionary. ###### 00:07:49.276 Done 37 runs in 2 second(s) 00:07:49.276 [2024-12-09 13:14:51.356791] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:07:49.537 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:49.537 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 
-- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:07:49.537 [2024-12-09 13:14:51.623794] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:49.537 [2024-12-09 13:14:51.623874] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308666 ] 00:07:49.537 [2024-12-09 13:14:51.719377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.537 [2024-12-09 13:14:51.759340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.798 INFO: Running with entropic power schedule (0xFF, 100). 00:07:49.798 INFO: Seed: 2806289837 00:07:49.798 INFO: Loaded 1 modules (387862 inline 8-bit counters): 387862 [0x2c4570c, 0x2ca4222), 00:07:49.798 INFO: Loaded 1 PC tables (387862 PCs): 387862 [0x2ca4228,0x328f388), 00:07:49.798 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:49.798 INFO: A corpus is not provided, starting from an empty corpus 00:07:49.798 #2 INITED exec/s: 0 rss: 67Mb 00:07:49.798 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:49.798 This may also happen if the target rejected all inputs we tried so far 00:07:49.798 [2024-12-09 13:14:52.004880] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:07:50.059 [2024-12-09 13:14:52.072803] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:50.059 [2024-12-09 13:14:52.072835] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:50.321 NEW_FUNC[1/678]: 0x43ebf8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:07:50.321 NEW_FUNC[2/678]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:50.321 #14 NEW cov: 11245 ft: 10719 corp: 2/10b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:50.321 [2024-12-09 13:14:52.549877] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:50.321 [2024-12-09 13:14:52.549917] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:50.582 #15 NEW cov: 11259 ft: 13823 corp: 3/19b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBit- 00:07:50.582 [2024-12-09 13:14:52.720358] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:50.582 [2024-12-09 13:14:52.720389] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:50.582 NEW_FUNC[1/1]: 0x1c22738 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:50.582 #16 NEW cov: 11276 ft: 15118 corp: 4/28b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ChangeBit- 00:07:50.843 [2024-12-09 13:14:52.889597] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: 
Invalid argument 00:07:50.843 [2024-12-09 13:14:52.889627] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:50.843 #17 NEW cov: 11276 ft: 15304 corp: 5/37b lim: 9 exec/s: 17 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:50.843 [2024-12-09 13:14:53.062409] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:50.843 [2024-12-09 13:14:53.062438] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:51.104 #18 NEW cov: 11276 ft: 16531 corp: 6/46b lim: 9 exec/s: 18 rss: 76Mb L: 9/9 MS: 1 ChangeBit- 00:07:51.104 [2024-12-09 13:14:53.231983] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:51.104 [2024-12-09 13:14:53.232013] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:51.104 #19 NEW cov: 11276 ft: 16672 corp: 7/55b lim: 9 exec/s: 19 rss: 76Mb L: 9/9 MS: 1 ChangeByte- 00:07:51.365 [2024-12-09 13:14:53.406722] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:51.365 [2024-12-09 13:14:53.406751] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:51.365 #20 NEW cov: 11276 ft: 16992 corp: 8/64b lim: 9 exec/s: 20 rss: 76Mb L: 9/9 MS: 1 CrossOver- 00:07:51.365 [2024-12-09 13:14:53.579758] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:51.365 [2024-12-09 13:14:53.579789] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:51.626 #21 NEW cov: 11276 ft: 17014 corp: 9/73b lim: 9 exec/s: 21 rss: 76Mb L: 9/9 MS: 1 CopyPart- 00:07:51.626 [2024-12-09 13:14:53.749643] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:51.626 [2024-12-09 13:14:53.749673] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:51.626 #22 NEW cov: 11283 ft: 17357 corp: 10/82b lim: 9 exec/s: 22 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:51.887 [2024-12-09 13:14:53.921559] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:51.887 [2024-12-09 13:14:53.921594] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:51.887 #23 NEW cov: 11283 ft: 17511 corp: 11/91b lim: 9 exec/s: 11 rss: 76Mb L: 9/9 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:51.887 #23 DONE cov: 11283 ft: 17511 corp: 11/91b lim: 9 exec/s: 11 rss: 76Mb 00:07:51.887 ###### Recommended dictionary. ###### 00:07:51.887 "\000\000\000\000\000\000\000\000" # Uses: 0 00:07:51.887 ###### End of recommended dictionary. 
###### 00:07:51.887 Done 23 runs in 2 second(s) 00:07:51.887 [2024-12-09 13:14:54.049780] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:07:52.148 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:07:52.148 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:52.148 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:52.148 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:07:52.148 00:07:52.148 real 0m19.878s 00:07:52.148 user 0m28.109s 00:07:52.148 sys 0m1.996s 00:07:52.148 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.148 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:52.148 ************************************ 00:07:52.148 END TEST vfio_llvm_fuzz 00:07:52.148 ************************************ 00:07:52.148 00:07:52.148 real 1m24.078s 00:07:52.148 user 2m8.385s 00:07:52.148 sys 0m9.715s 00:07:52.148 13:14:54 llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.148 13:14:54 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:52.148 ************************************ 00:07:52.148 END TEST llvm_fuzz 00:07:52.148 ************************************ 00:07:52.148 13:14:54 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:07:52.148 13:14:54 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:07:52.148 13:14:54 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:07:52.148 13:14:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.148 13:14:54 -- common/autotest_common.sh@10 -- # set +x 00:07:52.148 13:14:54 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:07:52.148 13:14:54 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:07:52.148 13:14:54 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:07:52.148 13:14:54 -- common/autotest_common.sh@10 -- # set +x 00:07:58.736 INFO: APP EXITING 00:07:58.736 INFO: killing all VMs 00:07:58.736 INFO: killing vhost app 00:07:58.736 INFO: EXIT DONE 00:08:02.039 Waiting for block devices as requested 00:08:02.039 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:02.039 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:02.298 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:02.298 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:02.298 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:02.558 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:02.558 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:02.558 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:02.818 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:02.818 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:02.818 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:03.079 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:03.079 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:03.079 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:03.339 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:03.339 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:03.339 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:08:07.542 Cleaning 00:08:07.542 Removing: /dev/shm/spdk_tgt_trace.pid281231 00:08:07.542 Removing: /var/run/dpdk/spdk_pid278762 00:08:07.542 Removing: /var/run/dpdk/spdk_pid279937 00:08:07.542 Removing: /var/run/dpdk/spdk_pid281231 00:08:07.542 Removing: /var/run/dpdk/spdk_pid281689 00:08:07.542 Removing: 
/var/run/dpdk/spdk_pid282772 00:08:07.542 Removing: /var/run/dpdk/spdk_pid282827 00:08:07.542 Removing: /var/run/dpdk/spdk_pid283910 00:08:07.542 Removing: /var/run/dpdk/spdk_pid283918 00:08:07.542 Removing: /var/run/dpdk/spdk_pid284347 00:08:07.542 Removing: /var/run/dpdk/spdk_pid284682 00:08:07.543 Removing: /var/run/dpdk/spdk_pid285003 00:08:07.543 Removing: /var/run/dpdk/spdk_pid285340 00:08:07.543 Removing: /var/run/dpdk/spdk_pid285615 00:08:07.543 Removing: /var/run/dpdk/spdk_pid285763 00:08:07.543 Removing: /var/run/dpdk/spdk_pid285993 00:08:07.543 Removing: /var/run/dpdk/spdk_pid286308 00:08:07.543 Removing: /var/run/dpdk/spdk_pid287064 00:08:07.543 Removing: /var/run/dpdk/spdk_pid290101 00:08:07.543 Removing: /var/run/dpdk/spdk_pid290349 00:08:07.543 Removing: /var/run/dpdk/spdk_pid290647 00:08:07.543 Removing: /var/run/dpdk/spdk_pid290658 00:08:07.543 Removing: /var/run/dpdk/spdk_pid291222 00:08:07.543 Removing: /var/run/dpdk/spdk_pid291290 00:08:07.543 Removing: /var/run/dpdk/spdk_pid291795 00:08:07.543 Removing: /var/run/dpdk/spdk_pid291834 00:08:07.543 Removing: /var/run/dpdk/spdk_pid292189 00:08:07.543 Removing: /var/run/dpdk/spdk_pid292354 00:08:07.543 Removing: /var/run/dpdk/spdk_pid292457 00:08:07.543 Removing: /var/run/dpdk/spdk_pid292655 00:08:07.543 Removing: /var/run/dpdk/spdk_pid293044 00:08:07.543 Removing: /var/run/dpdk/spdk_pid293324 00:08:07.543 Removing: /var/run/dpdk/spdk_pid293612 00:08:07.543 Removing: /var/run/dpdk/spdk_pid293823 00:08:07.543 Removing: /var/run/dpdk/spdk_pid294442 00:08:07.543 Removing: /var/run/dpdk/spdk_pid294967 00:08:07.543 Removing: /var/run/dpdk/spdk_pid295263 00:08:07.543 Removing: /var/run/dpdk/spdk_pid295796 00:08:07.543 Removing: /var/run/dpdk/spdk_pid296259 00:08:07.543 Removing: /var/run/dpdk/spdk_pid296616 00:08:07.543 Removing: /var/run/dpdk/spdk_pid297160 00:08:07.543 Removing: /var/run/dpdk/spdk_pid297560 00:08:07.543 Removing: /var/run/dpdk/spdk_pid298002 00:08:07.543 Removing: /var/run/dpdk/spdk_pid298537 00:08:07.543 Removing: /var/run/dpdk/spdk_pid298850 00:08:07.543 Removing: /var/run/dpdk/spdk_pid299356 00:08:07.543 Removing: /var/run/dpdk/spdk_pid299889 00:08:07.543 Removing: /var/run/dpdk/spdk_pid300184 00:08:07.543 Removing: /var/run/dpdk/spdk_pid300711 00:08:07.543 Removing: /var/run/dpdk/spdk_pid301208 00:08:07.543 Removing: /var/run/dpdk/spdk_pid301529 00:08:07.543 Removing: /var/run/dpdk/spdk_pid302069 00:08:07.543 Removing: /var/run/dpdk/spdk_pid302469 00:08:07.543 Removing: /var/run/dpdk/spdk_pid302885 00:08:07.543 Removing: /var/run/dpdk/spdk_pid303415 00:08:07.543 Removing: /var/run/dpdk/spdk_pid303743 00:08:07.543 Removing: /var/run/dpdk/spdk_pid304241 00:08:07.543 Removing: /var/run/dpdk/spdk_pid304768 00:08:07.543 Removing: /var/run/dpdk/spdk_pid305060 00:08:07.543 Removing: /var/run/dpdk/spdk_pid305691 00:08:07.543 Removing: /var/run/dpdk/spdk_pid306219 00:08:07.543 Removing: /var/run/dpdk/spdk_pid306761 00:08:07.543 Removing: /var/run/dpdk/spdk_pid307297 00:08:07.543 Removing: /var/run/dpdk/spdk_pid307740 00:08:07.543 Removing: /var/run/dpdk/spdk_pid308129 00:08:07.543 Removing: /var/run/dpdk/spdk_pid308666 00:08:07.543 Clean 00:08:07.543 13:15:09 -- common/autotest_common.sh@1453 -- # return 0 00:08:07.543 13:15:09 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:08:07.543 13:15:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:07.543 13:15:09 -- common/autotest_common.sh@10 -- # set +x 00:08:07.543 13:15:09 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:08:07.543 13:15:09 -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:08:07.543 13:15:09 -- common/autotest_common.sh@10 -- # set +x 00:08:07.543 13:15:09 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:07.543 13:15:09 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:07.543 13:15:09 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:08:07.543 13:15:09 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:08:07.543 13:15:09 -- spdk/autotest.sh@398 -- # hostname 00:08:07.543 13:15:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:08:07.543 geninfo: WARNING: invalid characters removed from testname! 00:08:12.831 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:08:18.105 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:08:20.179 13:15:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:28.428 13:15:29 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:33.698 13:15:35 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:38.973 13:15:40 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:44.251 13:15:45 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:49.530 13:15:50 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:54.808 13:15:56 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:08:54.808 13:15:56 -- spdk/autorun.sh@1 -- $ timing_finish 00:08:54.808 13:15:56 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]] 00:08:54.808 13:15:56 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:08:54.808 13:15:56 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:08:54.808 13:15:56 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:54.808 + [[ -n 167454 ]] 00:08:54.808 + sudo kill 167454 00:08:54.819 [Pipeline] } 00:08:54.832 [Pipeline] // stage 00:08:54.837 [Pipeline] } 00:08:54.849 [Pipeline] // timeout 00:08:54.854 [Pipeline] } 00:08:54.866 [Pipeline] // catchError 00:08:54.871 [Pipeline] } 00:08:54.883 [Pipeline] // wrap 00:08:54.889 [Pipeline] } 00:08:54.901 [Pipeline] // catchError 00:08:54.908 [Pipeline] stage 00:08:54.910 [Pipeline] { (Epilogue) 00:08:54.921 [Pipeline] catchError 00:08:54.923 [Pipeline] { 00:08:54.934 [Pipeline] echo 00:08:54.935 Cleanup processes 00:08:54.941 [Pipeline] sh 00:08:55.231 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:55.231 317891 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:55.244 [Pipeline] sh 00:08:55.531 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:55.531 ++ awk '{print $1}' 00:08:55.531 ++ grep -v 'sudo pgrep' 00:08:55.531 + sudo kill -9 00:08:55.531 + true 00:08:55.543 [Pipeline] sh 00:08:55.830 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:08:55.830 xz: Reduced the number of threads from 112 to 96 to not exceed the memory usage limit of 15,978 MiB 00:08:55.830 xz: Reduced the number of threads from 112 to 96 to not exceed the memory usage limit of 15,978 MiB 00:08:57.211 xz: Reduced the number of threads from 112 to 96 to not exceed the memory usage limit of 15,978 MiB 00:09:07.213 [Pipeline] sh 00:09:07.509 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:09:07.509 Artifacts sizes are 
good 00:09:07.523 [Pipeline] archiveArtifacts 00:09:07.530 Archiving artifacts 00:09:07.674 [Pipeline] sh 00:09:07.964 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:09:07.977 [Pipeline] cleanWs 00:09:07.986 [WS-CLEANUP] Deleting project workspace... 00:09:07.986 [WS-CLEANUP] Deferred wipeout is used... 00:09:07.993 [WS-CLEANUP] done 00:09:07.995 [Pipeline] } 00:09:08.009 [Pipeline] // catchError 00:09:08.018 [Pipeline] sh 00:09:08.301 + logger -p user.info -t JENKINS-CI 00:09:08.309 [Pipeline] } 00:09:08.321 [Pipeline] // stage 00:09:08.325 [Pipeline] } 00:09:08.338 [Pipeline] // node 00:09:08.341 [Pipeline] End of Pipeline 00:09:08.365 Finished: SUCCESS