00:00:00.001 Started by upstream project "autotest-per-patch" build number 131922 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.101 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.102 The recommended git tool is: git 00:00:00.102 using credential 00000000-0000-0000-0000-000000000002 00:00:00.104 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.132 Fetching changes from the remote Git repository 00:00:00.155 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.220 Using shallow fetch with depth 1 00:00:00.220 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.220 > git --version # timeout=10 00:00:00.261 > git --version # 'git version 2.39.2' 00:00:00.261 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.287 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.287 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.542 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.553 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.565 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:06.565 > git config core.sparsecheckout # timeout=10 00:00:06.574 > git read-tree -mu HEAD # timeout=10 00:00:06.589 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:06.606 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:06.606 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:06.688 [Pipeline] Start of Pipeline 00:00:06.702 [Pipeline] library 00:00:06.704 Loading library shm_lib@master 00:00:06.704 Library shm_lib@master is cached. Copying from home. 00:00:06.724 [Pipeline] node 00:00:06.739 Running on WFP49 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:06.741 [Pipeline] { 00:00:06.753 [Pipeline] catchError 00:00:06.754 [Pipeline] { 00:00:06.766 [Pipeline] wrap 00:00:06.777 [Pipeline] { 00:00:06.785 [Pipeline] stage 00:00:06.787 [Pipeline] { (Prologue) 00:00:06.998 [Pipeline] sh 00:00:07.285 + logger -p user.info -t JENKINS-CI 00:00:07.303 [Pipeline] echo 00:00:07.305 Node: WFP49 00:00:07.313 [Pipeline] sh 00:00:07.614 [Pipeline] setCustomBuildProperty 00:00:07.629 [Pipeline] echo 00:00:07.630 Cleanup processes 00:00:07.634 [Pipeline] sh 00:00:07.916 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:07.916 2987685 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:07.929 [Pipeline] sh 00:00:08.215 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:08.215 ++ grep -v 'sudo pgrep' 00:00:08.215 ++ awk '{print $1}' 00:00:08.215 + sudo kill -9 00:00:08.215 + true 00:00:08.228 [Pipeline] cleanWs 00:00:08.237 [WS-CLEANUP] Deleting project workspace... 00:00:08.237 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.248 [WS-CLEANUP] done 00:00:08.255 [Pipeline] setCustomBuildProperty 00:00:08.302 [Pipeline] sh 00:00:08.590 + sudo git config --global --replace-all safe.directory '*' 00:00:08.681 [Pipeline] httpRequest 00:00:09.787 [Pipeline] echo 00:00:09.788 Sorcerer 10.211.164.101 is alive 00:00:09.796 [Pipeline] retry 00:00:09.797 [Pipeline] { 00:00:09.806 [Pipeline] httpRequest 00:00:09.809 HttpMethod: GET 00:00:09.809 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.810 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.826 Response Code: HTTP/1.1 200 OK 00:00:09.826 Success: Status code 200 is in the accepted range: 200,404 00:00:09.827 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:15.954 [Pipeline] } 00:00:15.972 [Pipeline] // retry 00:00:15.980 [Pipeline] sh 00:00:16.265 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:16.282 [Pipeline] httpRequest 00:00:16.876 [Pipeline] echo 00:00:16.878 Sorcerer 10.211.164.101 is alive 00:00:16.888 [Pipeline] retry 00:00:16.890 [Pipeline] { 00:00:16.905 [Pipeline] httpRequest 00:00:16.910 HttpMethod: GET 00:00:16.910 URL: http://10.211.164.101/packages/spdk_344e7bdd4e2602d63f7401d850c23f41629710e7.tar.gz 00:00:16.911 Sending request to url: http://10.211.164.101/packages/spdk_344e7bdd4e2602d63f7401d850c23f41629710e7.tar.gz 00:00:16.931 Response Code: HTTP/1.1 200 OK 00:00:16.932 Success: Status code 200 is in the accepted range: 200,404 00:00:16.932 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_344e7bdd4e2602d63f7401d850c23f41629710e7.tar.gz 00:01:24.378 [Pipeline] } 00:01:24.398 [Pipeline] // retry 00:01:24.407 [Pipeline] sh 00:01:24.696 + tar --no-same-owner -xf spdk_344e7bdd4e2602d63f7401d850c23f41629710e7.tar.gz 00:01:27.243 [Pipeline] sh 00:01:27.527 + git -C spdk log --oneline -n5 00:01:27.527 344e7bdd4 lib/nvme: eventfd to handle disconnected I/O qpair 00:01:27.527 3943a020a nvme/poll_group: create and manage fd_group for nvme poll group 00:01:27.527 b1c3993ef nvme: interface to check disconnected queue pairs 00:01:27.527 6eb50bfcf lib/nvme: add opts_size to spdk_nvme_io_qpair_opts 00:01:27.527 0f14e29bd thread: Extended options for spdk_interrupt_register 00:01:27.539 [Pipeline] } 00:01:27.554 [Pipeline] // stage 00:01:27.563 [Pipeline] stage 00:01:27.565 [Pipeline] { (Prepare) 00:01:27.582 [Pipeline] writeFile 00:01:27.597 [Pipeline] sh 00:01:27.879 + logger -p user.info -t JENKINS-CI 00:01:27.895 [Pipeline] sh 00:01:28.177 + logger -p user.info -t JENKINS-CI 00:01:28.190 [Pipeline] sh 00:01:28.475 + cat autorun-spdk.conf 00:01:28.475 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.475 SPDK_TEST_FUZZER_SHORT=1 00:01:28.475 SPDK_TEST_FUZZER=1 00:01:28.475 SPDK_TEST_SETUP=1 00:01:28.475 SPDK_RUN_UBSAN=1 00:01:28.483 RUN_NIGHTLY=0 00:01:28.490 [Pipeline] readFile 00:01:28.522 [Pipeline] withEnv 00:01:28.525 [Pipeline] { 00:01:28.538 [Pipeline] sh 00:01:28.823 + set -ex 00:01:28.823 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:28.823 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:28.823 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.823 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:28.823 ++ SPDK_TEST_FUZZER=1 00:01:28.823 ++ SPDK_TEST_SETUP=1 00:01:28.823 ++ SPDK_RUN_UBSAN=1 00:01:28.823 ++ RUN_NIGHTLY=0 00:01:28.823 + case $SPDK_TEST_NVMF_NICS in 
00:01:28.823 + DRIVERS= 00:01:28.823 + [[ -n '' ]] 00:01:28.823 + exit 0 00:01:28.831 [Pipeline] } 00:01:28.846 [Pipeline] // withEnv 00:01:28.851 [Pipeline] } 00:01:28.864 [Pipeline] // stage 00:01:28.873 [Pipeline] catchError 00:01:28.875 [Pipeline] { 00:01:28.889 [Pipeline] timeout 00:01:28.889 Timeout set to expire in 30 min 00:01:28.891 [Pipeline] { 00:01:28.904 [Pipeline] stage 00:01:28.906 [Pipeline] { (Tests) 00:01:28.920 [Pipeline] sh 00:01:29.205 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:29.205 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:29.205 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:29.205 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:29.205 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:29.205 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:29.205 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:29.205 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:29.205 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:29.205 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:29.205 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:29.205 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:29.205 + source /etc/os-release 00:01:29.205 ++ NAME='Fedora Linux' 00:01:29.205 ++ VERSION='39 (Cloud Edition)' 00:01:29.205 ++ ID=fedora 00:01:29.205 ++ VERSION_ID=39 00:01:29.205 ++ VERSION_CODENAME= 00:01:29.205 ++ PLATFORM_ID=platform:f39 00:01:29.205 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:29.205 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:29.205 ++ LOGO=fedora-logo-icon 00:01:29.205 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:29.205 ++ HOME_URL=https://fedoraproject.org/ 00:01:29.205 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:29.205 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:29.205 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:29.205 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:29.205 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:29.205 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:29.205 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:29.205 ++ SUPPORT_END=2024-11-12 00:01:29.205 ++ VARIANT='Cloud Edition' 00:01:29.205 ++ VARIANT_ID=cloud 00:01:29.205 + uname -a 00:01:29.205 Linux spdk-wfp-49 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:29.205 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:32.497 Hugepages 00:01:32.497 node hugesize free / total 00:01:32.497 node0 1048576kB 0 / 0 00:01:32.497 node0 2048kB 0 / 0 00:01:32.497 node1 1048576kB 0 / 0 00:01:32.497 node1 2048kB 0 / 0 00:01:32.497 00:01:32.497 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:32.497 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:32.497 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:32.497 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:32.497 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:32.497 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:32.497 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:32.497 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:32.497 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:32.497 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:32.497 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:32.497 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 
00:01:32.497 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:32.497 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:32.497 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:32.497 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:32.497 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:32.497 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:32.497 + rm -f /tmp/spdk-ld-path 00:01:32.497 + source autorun-spdk.conf 00:01:32.497 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.497 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:32.497 ++ SPDK_TEST_FUZZER=1 00:01:32.497 ++ SPDK_TEST_SETUP=1 00:01:32.497 ++ SPDK_RUN_UBSAN=1 00:01:32.497 ++ RUN_NIGHTLY=0 00:01:32.497 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:32.497 + [[ -n '' ]] 00:01:32.497 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:32.497 + for M in /var/spdk/build-*-manifest.txt 00:01:32.497 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:32.497 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:32.497 + for M in /var/spdk/build-*-manifest.txt 00:01:32.497 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:32.497 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:32.497 + for M in /var/spdk/build-*-manifest.txt 00:01:32.497 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:32.497 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:32.497 ++ uname 00:01:32.497 + [[ Linux == \L\i\n\u\x ]] 00:01:32.497 + sudo dmesg -T 00:01:32.497 + sudo dmesg --clear 00:01:32.497 + dmesg_pid=2988536 00:01:32.497 + [[ Fedora Linux == FreeBSD ]] 00:01:32.497 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:32.497 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:32.497 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:32.497 + [[ -x /usr/src/fio-static/fio ]] 00:01:32.497 + export FIO_BIN=/usr/src/fio-static/fio 00:01:32.497 + FIO_BIN=/usr/src/fio-static/fio 00:01:32.497 + sudo dmesg -Tw 00:01:32.497 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:32.497 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:32.497 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:32.497 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:32.497 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:32.497 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:32.497 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:32.497 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:32.497 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:32.497 22:08:51 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:32.497 22:08:51 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:32.497 22:08:51 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.497 22:08:51 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1 00:01:32.497 22:08:51 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1 00:01:32.497 22:08:51 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1 00:01:32.497 22:08:51 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:32.497 22:08:51 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:32.497 22:08:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:32.497 22:08:51 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:32.497 22:08:51 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:32.497 22:08:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:32.497 22:08:51 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:32.497 22:08:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:32.497 22:08:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:32.497 22:08:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:32.497 22:08:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.497 22:08:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.497 22:08:51 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.497 22:08:51 -- paths/export.sh@5 -- $ export PATH 00:01:32.497 22:08:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.497 22:08:51 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:32.497 22:08:51 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:32.497 22:08:51 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730236131.XXXXXX 00:01:32.497 22:08:51 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730236131.ONFz3l 00:01:32.497 22:08:51 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:32.497 22:08:51 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:32.497 22:08:51 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:32.497 22:08:51 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:32.497 22:08:51 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:32.497 22:08:51 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:32.497 22:08:51 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:32.497 22:08:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.497 22:08:51 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:32.497 22:08:51 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:32.497 22:08:51 -- pm/common@17 -- $ local monitor 00:01:32.497 22:08:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.497 22:08:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.497 22:08:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.497 22:08:51 -- pm/common@21 -- $ date +%s 00:01:32.497 22:08:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.497 22:08:51 -- pm/common@21 -- $ date +%s 00:01:32.497 22:08:51 -- pm/common@25 -- $ sleep 1 00:01:32.497 22:08:51 -- pm/common@21 -- $ date +%s 00:01:32.497 22:08:51 -- pm/common@21 -- $ date +%s 00:01:32.498 22:08:51 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730236131 00:01:32.498 22:08:51 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730236131 00:01:32.498 22:08:51 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730236131 00:01:32.498 22:08:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730236131 00:01:32.498 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730236131_collect-vmstat.pm.log 00:01:32.498 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730236131_collect-cpu-load.pm.log 00:01:32.498 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730236131_collect-cpu-temp.pm.log 00:01:32.498 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730236131_collect-bmc-pm.bmc.pm.log 00:01:33.435 22:08:52 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:33.435 22:08:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:33.435 22:08:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:33.435 22:08:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:33.435 22:08:52 -- spdk/autobuild.sh@16 -- $ date -u 00:01:33.435 Tue Oct 29 09:08:52 PM UTC 2024 00:01:33.435 22:08:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:33.435 v25.01-pre-139-g344e7bdd4 00:01:33.435 22:08:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:33.435 22:08:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:33.435 22:08:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:33.435 22:08:52 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:33.435 22:08:52 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:33.435 22:08:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.435 ************************************ 00:01:33.435 START TEST ubsan 00:01:33.435 ************************************ 00:01:33.435 22:08:52 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:33.435 using ubsan 00:01:33.435 00:01:33.435 real 0m0.001s 00:01:33.435 user 0m0.001s 00:01:33.435 sys 0m0.000s 00:01:33.435 22:08:52 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:33.435 22:08:52 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:33.435 ************************************ 00:01:33.435 END TEST ubsan 00:01:33.435 ************************************ 00:01:33.693 22:08:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:33.693 22:08:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:33.693 22:08:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:33.693 22:08:52 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:33.693 22:08:52 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:33.693 22:08:52 -- common/autobuild_common.sh@438 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:33.693 22:08:52 -- common/autotest_common.sh@1103 -- $ '[' 2 
-le 1 ']' 00:01:33.693 22:08:52 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:33.693 22:08:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.693 ************************************ 00:01:33.693 START TEST autobuild_llvm_precompile 00:01:33.693 ************************************ 00:01:33.693 22:08:53 autobuild_llvm_precompile -- common/autotest_common.sh@1127 -- $ _llvm_precompile 00:01:33.693 22:08:53 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:01:33.693 22:08:53 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:01:33.693 Target: x86_64-redhat-linux-gnu 00:01:33.693 Thread model: posix 00:01:33.693 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:33.693 22:08:53 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:01:33.693 22:08:53 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:01:33.693 22:08:53 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:01:33.693 22:08:53 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:01:33.693 22:08:53 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:01:33.693 22:08:53 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:33.693 22:08:53 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:33.693 22:08:53 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:01:33.694 22:08:53 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:01:33.694 22:08:53 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:33.953 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:33.953 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:34.211 Using 'verbs' RDMA provider 00:01:50.469 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:05.357 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:05.357 Creating mk/config.mk...done. 00:02:05.357 Creating mk/cc.flags.mk...done. 00:02:05.357 Type 'make' to build. 
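[editor's note] The configure step above pins clang-17 as the compiler and points --with-fuzzer at the clang_rt fuzzer_no_main archive before the precompile build. A minimal sketch of the equivalent manual invocation, assuming the same Fedora 39 clang 17 layout seen in this log and an SPDK checkout at ./spdk (the checkout path and the -j value are illustrative; the CI run uses -j72):

# Sketch only: reproduce the CI's clang/fuzzer configure outside Jenkins.
# All --with/--enable flags below are copied from the configure line in the log;
# the ./spdk path is an assumption for a local checkout.
export CC=clang-17 CXX=clang++-17
FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
./spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd \
  --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
  --enable-ubsan --enable-coverage --with-ublk --with-vfio-user \
  --with-fuzzer="$FUZZER_LIB"
make -C spdk -j"$(nproc)"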
00:02:05.357 00:02:05.357 real 0m29.929s 00:02:05.357 user 0m13.300s 00:02:05.357 sys 0m16.098s 00:02:05.357 22:09:22 autobuild_llvm_precompile -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:05.357 22:09:22 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:02:05.357 ************************************ 00:02:05.357 END TEST autobuild_llvm_precompile 00:02:05.357 ************************************ 00:02:05.357 22:09:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:05.357 22:09:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:05.357 22:09:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:05.357 22:09:22 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:02:05.357 22:09:22 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:02:05.357 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:05.357 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:05.357 Using 'verbs' RDMA provider 00:02:17.576 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:29.799 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:29.799 Creating mk/config.mk...done. 00:02:29.799 Creating mk/cc.flags.mk...done. 00:02:29.799 Type 'make' to build. 00:02:29.799 22:09:48 -- spdk/autobuild.sh@70 -- $ run_test make make -j72 00:02:29.799 22:09:48 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:29.799 22:09:48 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:29.799 22:09:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.799 ************************************ 00:02:29.799 START TEST make 00:02:29.799 ************************************ 00:02:29.799 22:09:48 make -- common/autotest_common.sh@1127 -- $ make -j72 00:02:29.799 make[1]: Nothing to be done for 'all'. 
00:02:31.185 The Meson build system 00:02:31.185 Version: 1.5.0 00:02:31.185 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:02:31.185 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:31.185 Build type: native build 00:02:31.185 Project name: libvfio-user 00:02:31.185 Project version: 0.0.1 00:02:31.185 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:31.185 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:31.185 Host machine cpu family: x86_64 00:02:31.185 Host machine cpu: x86_64 00:02:31.185 Run-time dependency threads found: YES 00:02:31.185 Library dl found: YES 00:02:31.185 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:31.185 Run-time dependency json-c found: YES 0.17 00:02:31.185 Run-time dependency cmocka found: YES 1.1.7 00:02:31.185 Program pytest-3 found: NO 00:02:31.185 Program flake8 found: NO 00:02:31.185 Program misspell-fixer found: NO 00:02:31.185 Program restructuredtext-lint found: NO 00:02:31.185 Program valgrind found: YES (/usr/bin/valgrind) 00:02:31.185 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:31.185 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:31.185 Compiler for C supports arguments -Wwrite-strings: YES 00:02:31.185 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:31.185 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:31.185 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:31.185 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:31.185 Build targets in project: 8 00:02:31.185 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:31.185 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:31.185 00:02:31.185 libvfio-user 0.0.1 00:02:31.185 00:02:31.185 User defined options 00:02:31.185 buildtype : debug 00:02:31.185 default_library: static 00:02:31.185 libdir : /usr/local/lib 00:02:31.185 00:02:31.185 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:31.754 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:31.754 [1/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:31.754 [2/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:02:31.754 [3/36] Compiling C object samples/lspci.p/lspci.c.o 00:02:31.754 [4/36] Compiling C object samples/null.p/null.c.o 00:02:31.754 [5/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:02:31.754 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:02:31.754 [7/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:02:31.754 [8/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:31.754 [9/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:31.754 [10/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:31.754 [11/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:31.754 [12/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:02:31.754 [13/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:31.754 [14/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:31.754 [15/36] Compiling C object test/unit_tests.p/mocks.c.o 00:02:31.754 [16/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:02:31.754 [17/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:31.754 [18/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:02:31.754 [19/36] Compiling C object samples/server.p/server.c.o 00:02:31.754 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:31.754 [21/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:31.754 [22/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:31.754 [23/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:31.754 [24/36] Compiling C object samples/client.p/client.c.o 00:02:31.754 [25/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:31.754 [26/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:31.754 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:02:31.754 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:31.754 [29/36] Linking static target lib/libvfio-user.a 00:02:31.754 [30/36] Linking target samples/client 00:02:31.754 [31/36] Linking target test/unit_tests 00:02:31.754 [32/36] Linking target samples/shadow_ioeventfd_server 00:02:31.754 [33/36] Linking target samples/null 00:02:31.754 [34/36] Linking target samples/gpio-pci-idio-16 00:02:31.754 [35/36] Linking target samples/server 00:02:32.013 [36/36] Linking target samples/lspci 00:02:32.013 INFO: autodetecting backend as ninja 00:02:32.013 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:32.013 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:32.273 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:32.273 ninja: no work to do. 00:02:38.851 The Meson build system 00:02:38.851 Version: 1.5.0 00:02:38.851 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:38.851 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:38.851 Build type: native build 00:02:38.851 Program cat found: YES (/usr/bin/cat) 00:02:38.851 Project name: DPDK 00:02:38.851 Project version: 24.03.0 00:02:38.851 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:38.851 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:38.851 Host machine cpu family: x86_64 00:02:38.851 Host machine cpu: x86_64 00:02:38.851 Message: ## Building in Developer Mode ## 00:02:38.851 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:38.851 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:38.851 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:38.851 Program python3 found: YES (/usr/bin/python3) 00:02:38.851 Program cat found: YES (/usr/bin/cat) 00:02:38.851 Compiler for C supports arguments -march=native: YES 00:02:38.851 Checking for size of "void *" : 8 00:02:38.851 Checking for size of "void *" : 8 (cached) 00:02:38.851 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:38.851 Library m found: YES 00:02:38.851 Library numa found: YES 00:02:38.851 Has header "numaif.h" : YES 00:02:38.851 Library fdt found: NO 00:02:38.851 Library execinfo found: NO 00:02:38.851 Has header "execinfo.h" : YES 00:02:38.851 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:38.851 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:38.851 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:38.851 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:38.851 Run-time dependency openssl found: YES 3.1.1 00:02:38.851 Run-time dependency libpcap found: YES 1.10.4 00:02:38.851 Has header "pcap.h" with dependency libpcap: YES 00:02:38.851 Compiler for C supports arguments -Wcast-qual: YES 00:02:38.851 Compiler for C supports arguments -Wdeprecated: YES 00:02:38.851 Compiler for C supports arguments -Wformat: YES 00:02:38.851 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:38.851 Compiler for C supports arguments -Wformat-security: YES 00:02:38.851 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:38.851 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:38.851 Compiler for C supports arguments -Wnested-externs: YES 00:02:38.851 Compiler for C supports arguments -Wold-style-definition: YES 00:02:38.851 Compiler for C supports arguments -Wpointer-arith: YES 00:02:38.851 Compiler for C supports arguments -Wsign-compare: YES 00:02:38.851 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:38.851 Compiler for C supports arguments -Wundef: YES 00:02:38.851 Compiler for C supports arguments -Wwrite-strings: YES 00:02:38.851 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:38.851 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:38.851 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:02:38.851 Program objdump found: YES (/usr/bin/objdump) 00:02:38.851 Compiler for C supports arguments -mavx512f: YES 00:02:38.851 Checking if "AVX512 checking" compiles: YES 00:02:38.851 Fetching value of define "__SSE4_2__" : 1 00:02:38.851 Fetching value of define "__AES__" : 1 00:02:38.851 Fetching value of define "__AVX__" : 1 00:02:38.851 Fetching value of define "__AVX2__" : 1 00:02:38.851 Fetching value of define "__AVX512BW__" : 1 00:02:38.851 Fetching value of define "__AVX512CD__" : 1 00:02:38.851 Fetching value of define "__AVX512DQ__" : 1 00:02:38.851 Fetching value of define "__AVX512F__" : 1 00:02:38.851 Fetching value of define "__AVX512VL__" : 1 00:02:38.851 Fetching value of define "__PCLMUL__" : 1 00:02:38.851 Fetching value of define "__RDRND__" : 1 00:02:38.851 Fetching value of define "__RDSEED__" : 1 00:02:38.851 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:38.851 Fetching value of define "__znver1__" : (undefined) 00:02:38.851 Fetching value of define "__znver2__" : (undefined) 00:02:38.851 Fetching value of define "__znver3__" : (undefined) 00:02:38.851 Fetching value of define "__znver4__" : (undefined) 00:02:38.851 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:38.851 Message: lib/log: Defining dependency "log" 00:02:38.851 Message: lib/kvargs: Defining dependency "kvargs" 00:02:38.851 Message: lib/telemetry: Defining dependency "telemetry" 00:02:38.851 Checking for function "getentropy" : NO 00:02:38.851 Message: lib/eal: Defining dependency "eal" 00:02:38.851 Message: lib/ring: Defining dependency "ring" 00:02:38.851 Message: lib/rcu: Defining dependency "rcu" 00:02:38.851 Message: lib/mempool: Defining dependency "mempool" 00:02:38.851 Message: lib/mbuf: Defining dependency "mbuf" 00:02:38.851 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:38.851 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:38.851 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:38.851 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:38.851 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:38.851 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:38.851 Compiler for C supports arguments -mpclmul: YES 00:02:38.851 Compiler for C supports arguments -maes: YES 00:02:38.851 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:38.851 Compiler for C supports arguments -mavx512bw: YES 00:02:38.851 Compiler for C supports arguments -mavx512dq: YES 00:02:38.851 Compiler for C supports arguments -mavx512vl: YES 00:02:38.851 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:38.851 Compiler for C supports arguments -mavx2: YES 00:02:38.851 Compiler for C supports arguments -mavx: YES 00:02:38.851 Message: lib/net: Defining dependency "net" 00:02:38.851 Message: lib/meter: Defining dependency "meter" 00:02:38.851 Message: lib/ethdev: Defining dependency "ethdev" 00:02:38.851 Message: lib/pci: Defining dependency "pci" 00:02:38.851 Message: lib/cmdline: Defining dependency "cmdline" 00:02:38.851 Message: lib/hash: Defining dependency "hash" 00:02:38.851 Message: lib/timer: Defining dependency "timer" 00:02:38.851 Message: lib/compressdev: Defining dependency "compressdev" 00:02:38.851 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:38.851 Message: lib/dmadev: Defining dependency "dmadev" 00:02:38.851 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:38.851 Message: lib/power: Defining dependency "power" 00:02:38.851 Message: lib/reorder: Defining 
dependency "reorder" 00:02:38.851 Message: lib/security: Defining dependency "security" 00:02:38.851 Has header "linux/userfaultfd.h" : YES 00:02:38.851 Has header "linux/vduse.h" : YES 00:02:38.851 Message: lib/vhost: Defining dependency "vhost" 00:02:38.851 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:38.851 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:38.851 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:38.851 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:38.851 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:38.851 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:38.851 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:38.851 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:38.851 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:38.851 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:38.851 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:38.851 Configuring doxy-api-html.conf using configuration 00:02:38.851 Configuring doxy-api-man.conf using configuration 00:02:38.851 Program mandb found: YES (/usr/bin/mandb) 00:02:38.851 Program sphinx-build found: NO 00:02:38.851 Configuring rte_build_config.h using configuration 00:02:38.851 Message: 00:02:38.851 ================= 00:02:38.851 Applications Enabled 00:02:38.851 ================= 00:02:38.851 00:02:38.851 apps: 00:02:38.851 00:02:38.851 00:02:38.851 Message: 00:02:38.851 ================= 00:02:38.851 Libraries Enabled 00:02:38.851 ================= 00:02:38.851 00:02:38.851 libs: 00:02:38.851 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:38.851 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:38.851 cryptodev, dmadev, power, reorder, security, vhost, 00:02:38.851 00:02:38.851 Message: 00:02:38.851 =============== 00:02:38.851 Drivers Enabled 00:02:38.851 =============== 00:02:38.851 00:02:38.851 common: 00:02:38.851 00:02:38.851 bus: 00:02:38.851 pci, vdev, 00:02:38.851 mempool: 00:02:38.851 ring, 00:02:38.851 dma: 00:02:38.851 00:02:38.851 net: 00:02:38.851 00:02:38.851 crypto: 00:02:38.851 00:02:38.851 compress: 00:02:38.851 00:02:38.851 vdpa: 00:02:38.851 00:02:38.851 00:02:38.851 Message: 00:02:38.851 ================= 00:02:38.851 Content Skipped 00:02:38.851 ================= 00:02:38.851 00:02:38.851 apps: 00:02:38.851 dumpcap: explicitly disabled via build config 00:02:38.851 graph: explicitly disabled via build config 00:02:38.851 pdump: explicitly disabled via build config 00:02:38.851 proc-info: explicitly disabled via build config 00:02:38.851 test-acl: explicitly disabled via build config 00:02:38.851 test-bbdev: explicitly disabled via build config 00:02:38.851 test-cmdline: explicitly disabled via build config 00:02:38.851 test-compress-perf: explicitly disabled via build config 00:02:38.851 test-crypto-perf: explicitly disabled via build config 00:02:38.851 test-dma-perf: explicitly disabled via build config 00:02:38.851 test-eventdev: explicitly disabled via build config 00:02:38.851 test-fib: explicitly disabled via build config 00:02:38.851 test-flow-perf: explicitly disabled via build config 00:02:38.851 test-gpudev: explicitly disabled via build config 00:02:38.851 test-mldev: explicitly disabled via build config 00:02:38.851 test-pipeline: explicitly disabled via build config 00:02:38.851 test-pmd: 
explicitly disabled via build config 00:02:38.851 test-regex: explicitly disabled via build config 00:02:38.851 test-sad: explicitly disabled via build config 00:02:38.851 test-security-perf: explicitly disabled via build config 00:02:38.851 00:02:38.851 libs: 00:02:38.851 argparse: explicitly disabled via build config 00:02:38.852 metrics: explicitly disabled via build config 00:02:38.852 acl: explicitly disabled via build config 00:02:38.852 bbdev: explicitly disabled via build config 00:02:38.852 bitratestats: explicitly disabled via build config 00:02:38.852 bpf: explicitly disabled via build config 00:02:38.852 cfgfile: explicitly disabled via build config 00:02:38.852 distributor: explicitly disabled via build config 00:02:38.852 efd: explicitly disabled via build config 00:02:38.852 eventdev: explicitly disabled via build config 00:02:38.852 dispatcher: explicitly disabled via build config 00:02:38.852 gpudev: explicitly disabled via build config 00:02:38.852 gro: explicitly disabled via build config 00:02:38.852 gso: explicitly disabled via build config 00:02:38.852 ip_frag: explicitly disabled via build config 00:02:38.852 jobstats: explicitly disabled via build config 00:02:38.852 latencystats: explicitly disabled via build config 00:02:38.852 lpm: explicitly disabled via build config 00:02:38.852 member: explicitly disabled via build config 00:02:38.852 pcapng: explicitly disabled via build config 00:02:38.852 rawdev: explicitly disabled via build config 00:02:38.852 regexdev: explicitly disabled via build config 00:02:38.852 mldev: explicitly disabled via build config 00:02:38.852 rib: explicitly disabled via build config 00:02:38.852 sched: explicitly disabled via build config 00:02:38.852 stack: explicitly disabled via build config 00:02:38.852 ipsec: explicitly disabled via build config 00:02:38.852 pdcp: explicitly disabled via build config 00:02:38.852 fib: explicitly disabled via build config 00:02:38.852 port: explicitly disabled via build config 00:02:38.852 pdump: explicitly disabled via build config 00:02:38.852 table: explicitly disabled via build config 00:02:38.852 pipeline: explicitly disabled via build config 00:02:38.852 graph: explicitly disabled via build config 00:02:38.852 node: explicitly disabled via build config 00:02:38.852 00:02:38.852 drivers: 00:02:38.852 common/cpt: not in enabled drivers build config 00:02:38.852 common/dpaax: not in enabled drivers build config 00:02:38.852 common/iavf: not in enabled drivers build config 00:02:38.852 common/idpf: not in enabled drivers build config 00:02:38.852 common/ionic: not in enabled drivers build config 00:02:38.852 common/mvep: not in enabled drivers build config 00:02:38.852 common/octeontx: not in enabled drivers build config 00:02:38.852 bus/auxiliary: not in enabled drivers build config 00:02:38.852 bus/cdx: not in enabled drivers build config 00:02:38.852 bus/dpaa: not in enabled drivers build config 00:02:38.852 bus/fslmc: not in enabled drivers build config 00:02:38.852 bus/ifpga: not in enabled drivers build config 00:02:38.852 bus/platform: not in enabled drivers build config 00:02:38.852 bus/uacce: not in enabled drivers build config 00:02:38.852 bus/vmbus: not in enabled drivers build config 00:02:38.852 common/cnxk: not in enabled drivers build config 00:02:38.852 common/mlx5: not in enabled drivers build config 00:02:38.852 common/nfp: not in enabled drivers build config 00:02:38.852 common/nitrox: not in enabled drivers build config 00:02:38.852 common/qat: not in enabled drivers build config 
00:02:38.852 common/sfc_efx: not in enabled drivers build config 00:02:38.852 mempool/bucket: not in enabled drivers build config 00:02:38.852 mempool/cnxk: not in enabled drivers build config 00:02:38.852 mempool/dpaa: not in enabled drivers build config 00:02:38.852 mempool/dpaa2: not in enabled drivers build config 00:02:38.852 mempool/octeontx: not in enabled drivers build config 00:02:38.852 mempool/stack: not in enabled drivers build config 00:02:38.852 dma/cnxk: not in enabled drivers build config 00:02:38.852 dma/dpaa: not in enabled drivers build config 00:02:38.852 dma/dpaa2: not in enabled drivers build config 00:02:38.852 dma/hisilicon: not in enabled drivers build config 00:02:38.852 dma/idxd: not in enabled drivers build config 00:02:38.852 dma/ioat: not in enabled drivers build config 00:02:38.852 dma/skeleton: not in enabled drivers build config 00:02:38.852 net/af_packet: not in enabled drivers build config 00:02:38.852 net/af_xdp: not in enabled drivers build config 00:02:38.852 net/ark: not in enabled drivers build config 00:02:38.852 net/atlantic: not in enabled drivers build config 00:02:38.852 net/avp: not in enabled drivers build config 00:02:38.852 net/axgbe: not in enabled drivers build config 00:02:38.852 net/bnx2x: not in enabled drivers build config 00:02:38.852 net/bnxt: not in enabled drivers build config 00:02:38.852 net/bonding: not in enabled drivers build config 00:02:38.852 net/cnxk: not in enabled drivers build config 00:02:38.852 net/cpfl: not in enabled drivers build config 00:02:38.852 net/cxgbe: not in enabled drivers build config 00:02:38.852 net/dpaa: not in enabled drivers build config 00:02:38.852 net/dpaa2: not in enabled drivers build config 00:02:38.852 net/e1000: not in enabled drivers build config 00:02:38.852 net/ena: not in enabled drivers build config 00:02:38.852 net/enetc: not in enabled drivers build config 00:02:38.852 net/enetfec: not in enabled drivers build config 00:02:38.852 net/enic: not in enabled drivers build config 00:02:38.852 net/failsafe: not in enabled drivers build config 00:02:38.852 net/fm10k: not in enabled drivers build config 00:02:38.852 net/gve: not in enabled drivers build config 00:02:38.852 net/hinic: not in enabled drivers build config 00:02:38.852 net/hns3: not in enabled drivers build config 00:02:38.852 net/i40e: not in enabled drivers build config 00:02:38.852 net/iavf: not in enabled drivers build config 00:02:38.852 net/ice: not in enabled drivers build config 00:02:38.852 net/idpf: not in enabled drivers build config 00:02:38.852 net/igc: not in enabled drivers build config 00:02:38.852 net/ionic: not in enabled drivers build config 00:02:38.852 net/ipn3ke: not in enabled drivers build config 00:02:38.852 net/ixgbe: not in enabled drivers build config 00:02:38.852 net/mana: not in enabled drivers build config 00:02:38.852 net/memif: not in enabled drivers build config 00:02:38.852 net/mlx4: not in enabled drivers build config 00:02:38.852 net/mlx5: not in enabled drivers build config 00:02:38.852 net/mvneta: not in enabled drivers build config 00:02:38.852 net/mvpp2: not in enabled drivers build config 00:02:38.852 net/netvsc: not in enabled drivers build config 00:02:38.852 net/nfb: not in enabled drivers build config 00:02:38.852 net/nfp: not in enabled drivers build config 00:02:38.852 net/ngbe: not in enabled drivers build config 00:02:38.852 net/null: not in enabled drivers build config 00:02:38.852 net/octeontx: not in enabled drivers build config 00:02:38.852 net/octeon_ep: not in enabled 
drivers build config 00:02:38.852 net/pcap: not in enabled drivers build config 00:02:38.852 net/pfe: not in enabled drivers build config 00:02:38.852 net/qede: not in enabled drivers build config 00:02:38.852 net/ring: not in enabled drivers build config 00:02:38.852 net/sfc: not in enabled drivers build config 00:02:38.852 net/softnic: not in enabled drivers build config 00:02:38.852 net/tap: not in enabled drivers build config 00:02:38.852 net/thunderx: not in enabled drivers build config 00:02:38.852 net/txgbe: not in enabled drivers build config 00:02:38.852 net/vdev_netvsc: not in enabled drivers build config 00:02:38.852 net/vhost: not in enabled drivers build config 00:02:38.852 net/virtio: not in enabled drivers build config 00:02:38.852 net/vmxnet3: not in enabled drivers build config 00:02:38.852 raw/*: missing internal dependency, "rawdev" 00:02:38.852 crypto/armv8: not in enabled drivers build config 00:02:38.852 crypto/bcmfs: not in enabled drivers build config 00:02:38.852 crypto/caam_jr: not in enabled drivers build config 00:02:38.852 crypto/ccp: not in enabled drivers build config 00:02:38.852 crypto/cnxk: not in enabled drivers build config 00:02:38.852 crypto/dpaa_sec: not in enabled drivers build config 00:02:38.852 crypto/dpaa2_sec: not in enabled drivers build config 00:02:38.852 crypto/ipsec_mb: not in enabled drivers build config 00:02:38.852 crypto/mlx5: not in enabled drivers build config 00:02:38.852 crypto/mvsam: not in enabled drivers build config 00:02:38.852 crypto/nitrox: not in enabled drivers build config 00:02:38.852 crypto/null: not in enabled drivers build config 00:02:38.852 crypto/octeontx: not in enabled drivers build config 00:02:38.852 crypto/openssl: not in enabled drivers build config 00:02:38.852 crypto/scheduler: not in enabled drivers build config 00:02:38.852 crypto/uadk: not in enabled drivers build config 00:02:38.852 crypto/virtio: not in enabled drivers build config 00:02:38.852 compress/isal: not in enabled drivers build config 00:02:38.852 compress/mlx5: not in enabled drivers build config 00:02:38.852 compress/nitrox: not in enabled drivers build config 00:02:38.852 compress/octeontx: not in enabled drivers build config 00:02:38.852 compress/zlib: not in enabled drivers build config 00:02:38.852 regex/*: missing internal dependency, "regexdev" 00:02:38.852 ml/*: missing internal dependency, "mldev" 00:02:38.852 vdpa/ifc: not in enabled drivers build config 00:02:38.852 vdpa/mlx5: not in enabled drivers build config 00:02:38.852 vdpa/nfp: not in enabled drivers build config 00:02:38.852 vdpa/sfc: not in enabled drivers build config 00:02:38.852 event/*: missing internal dependency, "eventdev" 00:02:38.852 baseband/*: missing internal dependency, "bbdev" 00:02:38.852 gpu/*: missing internal dependency, "gpudev" 00:02:38.852 00:02:38.852 00:02:38.852 Build targets in project: 85 00:02:38.852 00:02:38.852 DPDK 24.03.0 00:02:38.852 00:02:38.852 User defined options 00:02:38.852 buildtype : debug 00:02:38.852 default_library : static 00:02:38.852 libdir : lib 00:02:38.852 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:38.852 c_args : -fPIC -Werror 00:02:38.852 c_link_args : 00:02:38.852 cpu_instruction_set: native 00:02:38.852 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:38.852 disable_libs : 
port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:38.852 enable_docs : false 00:02:38.852 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:38.852 enable_kmods : false 00:02:38.852 max_lcores : 128 00:02:38.852 tests : false 00:02:38.852 00:02:38.852 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:38.852 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:38.852 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:38.852 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:38.852 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:38.852 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:38.852 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:38.852 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:38.853 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:38.853 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:38.853 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:38.853 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:38.853 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:38.853 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:38.853 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:38.853 [14/268] Linking static target lib/librte_kvargs.a 00:02:38.853 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:38.853 [16/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:38.853 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:38.853 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:38.853 [19/268] Linking static target lib/librte_log.a 00:02:39.113 [20/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:39.113 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:39.113 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:39.113 [23/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:39.113 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:39.113 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:39.113 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:39.113 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:39.113 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:39.113 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:39.113 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:39.113 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:39.113 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:39.113 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:39.113 [34/268] Compiling C 
object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:39.113 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:39.113 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:39.113 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:39.113 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:39.113 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:39.113 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:39.113 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:39.113 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:39.113 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:39.113 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:39.113 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:39.113 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:39.113 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:39.113 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:39.113 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:39.113 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:39.113 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:39.113 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:39.113 [53/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:39.113 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:39.113 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:39.113 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:39.113 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:39.113 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:39.113 [59/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:39.113 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:39.113 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:39.113 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:39.113 [63/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.113 [64/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:39.113 [65/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:39.113 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:39.374 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:39.374 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:39.374 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:39.374 [70/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:39.374 [71/268] Linking static target lib/librte_ring.a 00:02:39.374 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:39.374 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:39.374 
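The "User defined options" summary earlier in this build output records how the DPDK 24.03.0 subtree was configured before ninja entered build-tmp. A minimal sketch of a meson setup invocation that would produce that summary is given below; the option names are taken from the summary itself, but the exact command line used by the SPDK build scripts is not shown in this log, and the long disable_apps/disable_libs lists are abbreviated here.

    # Illustrative reconstruction only - paths and list contents come from the
    # options summary above, not from an actual command captured in this log.
    meson setup build-tmp \
      --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
      --buildtype=debug --default-library=static --libdir=lib \
      -Dc_args='-fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps='test-dma-perf,test,test-sad,...' \
      -Ddisable_libs='port,lpm,ipsec,...' \
      -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
      -Denable_docs=false -Denable_kmods=false \
      -Dmax_lcores=128 -Dtests=false
    ninja -C build-tmp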
[74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:39.374 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:39.374 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:39.374 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:39.374 [78/268] Linking static target lib/librte_telemetry.a 00:02:39.374 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:39.374 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:39.374 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:39.374 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:39.374 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:39.374 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:39.374 [85/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:39.374 [86/268] Linking static target lib/librte_pci.a 00:02:39.374 [87/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:39.374 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:39.374 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:39.374 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:39.374 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:39.374 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:39.374 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:39.374 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:39.374 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:39.374 [96/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:39.374 [97/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:39.374 [98/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:39.374 [99/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:39.375 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:39.375 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:39.375 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:39.375 [103/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:39.375 [104/268] Linking static target lib/librte_mempool.a 00:02:39.375 [105/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:39.375 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:39.375 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:39.375 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:39.375 [109/268] Linking static target lib/librte_rcu.a 00:02:39.375 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:39.375 [111/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:39.375 [112/268] Linking static target lib/librte_eal.a 00:02:39.375 [113/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:39.637 [114/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:39.637 [115/268] Linking static target lib/librte_mbuf.a 00:02:39.637 [116/268] 
Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:39.637 [117/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:39.637 [118/268] Linking static target lib/librte_net.a 00:02:39.637 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.637 [120/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.637 [121/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:39.637 [122/268] Linking static target lib/librte_meter.a 00:02:39.637 [123/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.637 [124/268] Linking target lib/librte_log.so.24.1 00:02:39.895 [125/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:39.895 [126/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:39.895 [127/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:39.895 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:39.895 [129/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.895 [130/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:39.895 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:39.895 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:39.895 [133/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:39.895 [134/268] Linking static target lib/librte_timer.a 00:02:39.895 [135/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:39.895 [136/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:39.895 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:39.895 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:39.895 [139/268] Linking static target lib/librte_cmdline.a 00:02:39.895 [140/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:39.895 [141/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:39.895 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:39.895 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:39.895 [144/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:39.895 [145/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:39.895 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:39.895 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.895 [148/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.895 [149/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:39.895 [150/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:39.895 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:39.895 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:39.895 [153/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:39.895 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.895 [155/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:39.895 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:39.895 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:39.895 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:39.895 [159/268] Linking static target lib/librte_dmadev.a 00:02:39.895 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:39.895 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:39.895 [162/268] Linking static target lib/librte_compressdev.a 00:02:39.895 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:39.895 [164/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.895 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:39.895 [166/268] Linking target lib/librte_kvargs.so.24.1 00:02:39.895 [167/268] Linking target lib/librte_telemetry.so.24.1 00:02:39.895 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:39.895 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:39.895 [170/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.895 [171/268] Linking static target lib/librte_power.a 00:02:39.895 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:39.895 [173/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:39.895 [174/268] Linking static target lib/librte_reorder.a 00:02:40.153 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:40.153 [176/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:40.153 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:40.153 [178/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:40.153 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:40.153 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:40.153 [181/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:40.153 [182/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:40.153 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:40.153 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:40.153 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:40.153 [186/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:40.153 [187/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:40.153 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:40.153 [189/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:40.153 [190/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:40.153 [191/268] Linking static target lib/librte_security.a 00:02:40.153 [192/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:40.153 [193/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:40.153 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:40.153 [195/268] Linking static target lib/librte_hash.a 00:02:40.153 [196/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:40.153 [197/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.153 [198/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:40.153 [199/268] Linking static target lib/librte_cryptodev.a 00:02:40.153 [200/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.153 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:40.154 [202/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:40.412 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:40.412 [204/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:40.412 [205/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:40.412 [206/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.412 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:40.412 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:40.412 [209/268] Linking static target drivers/librte_bus_vdev.a 00:02:40.412 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:40.412 [211/268] Linking static target drivers/librte_bus_pci.a 00:02:40.412 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:40.412 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:40.412 [214/268] Linking static target drivers/librte_mempool_ring.a 00:02:40.412 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:40.412 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.672 [217/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:40.672 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:40.672 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.672 [220/268] Linking static target lib/librte_ethdev.a 00:02:40.672 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.672 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.955 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.956 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.307 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:41.307 [226/268] Linking static target lib/librte_vhost.a 00:02:41.307 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.307 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.307 [229/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.730 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.299 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.426 [232/268] Generating 
lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.365 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.623 [234/268] Linking target lib/librte_eal.so.24.1 00:02:52.623 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:52.623 [236/268] Linking target lib/librte_meter.so.24.1 00:02:52.623 [237/268] Linking target lib/librte_ring.so.24.1 00:02:52.623 [238/268] Linking target lib/librte_timer.so.24.1 00:02:52.623 [239/268] Linking target lib/librte_pci.so.24.1 00:02:52.623 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:52.623 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:52.883 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:52.883 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:52.883 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:52.883 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:52.883 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:52.883 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:52.883 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:52.883 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:53.142 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:53.142 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:53.142 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:53.142 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:53.142 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:53.401 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:53.401 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:53.401 [257/268] Linking target lib/librte_net.so.24.1 00:02:53.401 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:53.402 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:53.402 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:53.661 [261/268] Linking target lib/librte_hash.so.24.1 00:02:53.661 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:53.661 [263/268] Linking target lib/librte_security.so.24.1 00:02:53.661 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:53.661 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:53.661 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:53.661 [267/268] Linking target lib/librte_power.so.24.1 00:02:53.920 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:53.920 INFO: autodetecting backend as ninja 00:02:53.920 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:54.858 CC lib/ut_mock/mock.o 00:02:54.858 CC lib/log/log.o 00:02:54.858 CC lib/log/log_flags.o 00:02:54.858 CC lib/log/log_deprecated.o 00:02:54.858 CC lib/ut/ut.o 00:02:54.858 LIB libspdk_ut_mock.a 00:02:54.858 LIB libspdk_ut.a 00:02:54.858 LIB libspdk_log.a 00:02:55.427 CC lib/util/bit_array.o 00:02:55.427 CC lib/util/base64.o 00:02:55.427 CC lib/util/cpuset.o 00:02:55.427 CC lib/util/crc16.o 00:02:55.427 CC 
lib/util/crc32.o 00:02:55.427 CC lib/util/crc32_ieee.o 00:02:55.427 CC lib/util/crc32c.o 00:02:55.427 CC lib/util/crc64.o 00:02:55.427 CC lib/dma/dma.o 00:02:55.427 CC lib/ioat/ioat.o 00:02:55.427 CC lib/util/dif.o 00:02:55.427 CC lib/util/fd.o 00:02:55.427 CC lib/util/fd_group.o 00:02:55.427 CC lib/util/file.o 00:02:55.427 CC lib/util/hexlify.o 00:02:55.427 CC lib/util/iov.o 00:02:55.427 CC lib/util/math.o 00:02:55.427 CC lib/util/net.o 00:02:55.427 CC lib/util/pipe.o 00:02:55.427 CXX lib/trace_parser/trace.o 00:02:55.427 CC lib/util/strerror_tls.o 00:02:55.427 CC lib/util/string.o 00:02:55.427 CC lib/util/uuid.o 00:02:55.427 CC lib/util/xor.o 00:02:55.427 CC lib/util/zipf.o 00:02:55.427 CC lib/util/md5.o 00:02:55.427 CC lib/vfio_user/host/vfio_user_pci.o 00:02:55.427 CC lib/vfio_user/host/vfio_user.o 00:02:55.427 LIB libspdk_dma.a 00:02:55.427 LIB libspdk_ioat.a 00:02:55.687 LIB libspdk_vfio_user.a 00:02:55.687 LIB libspdk_util.a 00:02:55.687 LIB libspdk_trace_parser.a 00:02:55.946 CC lib/json/json_parse.o 00:02:55.946 CC lib/json/json_util.o 00:02:55.946 CC lib/idxd/idxd.o 00:02:55.946 CC lib/idxd/idxd_user.o 00:02:55.946 CC lib/json/json_write.o 00:02:55.946 CC lib/conf/conf.o 00:02:55.946 CC lib/idxd/idxd_kernel.o 00:02:55.946 CC lib/env_dpdk/env.o 00:02:55.946 CC lib/env_dpdk/memory.o 00:02:55.946 CC lib/env_dpdk/pci.o 00:02:55.946 CC lib/env_dpdk/init.o 00:02:55.946 CC lib/env_dpdk/pci_ioat.o 00:02:55.946 CC lib/env_dpdk/threads.o 00:02:55.946 CC lib/env_dpdk/pci_vmd.o 00:02:55.946 CC lib/env_dpdk/pci_virtio.o 00:02:55.946 CC lib/env_dpdk/pci_idxd.o 00:02:55.946 CC lib/env_dpdk/pci_event.o 00:02:55.946 CC lib/env_dpdk/sigbus_handler.o 00:02:55.946 CC lib/rdma_utils/rdma_utils.o 00:02:55.946 CC lib/env_dpdk/pci_dpdk.o 00:02:55.946 CC lib/vmd/vmd.o 00:02:55.946 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:55.946 CC lib/vmd/led.o 00:02:55.946 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:55.946 CC lib/rdma_provider/common.o 00:02:55.946 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:55.946 LIB libspdk_rdma_provider.a 00:02:56.206 LIB libspdk_conf.a 00:02:56.206 LIB libspdk_json.a 00:02:56.206 LIB libspdk_rdma_utils.a 00:02:56.206 LIB libspdk_idxd.a 00:02:56.206 LIB libspdk_vmd.a 00:02:56.465 CC lib/jsonrpc/jsonrpc_server.o 00:02:56.465 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:56.465 CC lib/jsonrpc/jsonrpc_client.o 00:02:56.465 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:56.465 LIB libspdk_jsonrpc.a 00:02:57.034 LIB libspdk_env_dpdk.a 00:02:57.034 CC lib/rpc/rpc.o 00:02:57.034 LIB libspdk_rpc.a 00:02:57.293 CC lib/trace/trace.o 00:02:57.293 CC lib/keyring/keyring.o 00:02:57.293 CC lib/notify/notify.o 00:02:57.293 CC lib/trace/trace_flags.o 00:02:57.293 CC lib/notify/notify_rpc.o 00:02:57.293 CC lib/keyring/keyring_rpc.o 00:02:57.293 CC lib/trace/trace_rpc.o 00:02:57.552 LIB libspdk_notify.a 00:02:57.552 LIB libspdk_trace.a 00:02:57.552 LIB libspdk_keyring.a 00:02:57.811 CC lib/sock/sock.o 00:02:57.811 CC lib/sock/sock_rpc.o 00:02:57.811 CC lib/thread/thread.o 00:02:57.811 CC lib/thread/iobuf.o 00:02:58.071 LIB libspdk_sock.a 00:02:58.331 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:58.331 CC lib/nvme/nvme_ctrlr.o 00:02:58.331 CC lib/nvme/nvme_fabric.o 00:02:58.331 CC lib/nvme/nvme_ns_cmd.o 00:02:58.331 CC lib/nvme/nvme_ns.o 00:02:58.331 CC lib/nvme/nvme_pcie_common.o 00:02:58.331 CC lib/nvme/nvme_pcie.o 00:02:58.331 CC lib/nvme/nvme_qpair.o 00:02:58.331 CC lib/nvme/nvme.o 00:02:58.331 CC lib/nvme/nvme_quirks.o 00:02:58.331 CC lib/nvme/nvme_transport.o 00:02:58.331 CC lib/nvme/nvme_discovery.o 00:02:58.331 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:58.331 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:58.331 CC lib/nvme/nvme_tcp.o 00:02:58.331 CC lib/nvme/nvme_opal.o 00:02:58.331 CC lib/nvme/nvme_io_msg.o 00:02:58.331 CC lib/nvme/nvme_poll_group.o 00:02:58.331 CC lib/nvme/nvme_zns.o 00:02:58.331 CC lib/nvme/nvme_stubs.o 00:02:58.331 CC lib/nvme/nvme_auth.o 00:02:58.331 CC lib/nvme/nvme_cuse.o 00:02:58.331 CC lib/nvme/nvme_vfio_user.o 00:02:58.331 CC lib/nvme/nvme_rdma.o 00:02:58.590 LIB libspdk_thread.a 00:02:58.849 CC lib/vfu_tgt/tgt_endpoint.o 00:02:58.849 CC lib/vfu_tgt/tgt_rpc.o 00:02:58.849 CC lib/accel/accel.o 00:02:58.849 CC lib/accel/accel_rpc.o 00:02:58.849 CC lib/accel/accel_sw.o 00:02:58.849 CC lib/blob/request.o 00:02:58.849 CC lib/blob/blobstore.o 00:02:58.849 CC lib/fsdev/fsdev.o 00:02:58.849 CC lib/virtio/virtio.o 00:02:58.849 CC lib/fsdev/fsdev_rpc.o 00:02:58.849 CC lib/virtio/virtio_vhost_user.o 00:02:58.849 CC lib/blob/blob_bs_dev.o 00:02:58.849 CC lib/fsdev/fsdev_io.o 00:02:58.849 CC lib/blob/zeroes.o 00:02:58.849 CC lib/virtio/virtio_vfio_user.o 00:02:58.849 CC lib/virtio/virtio_pci.o 00:02:58.849 CC lib/init/json_config.o 00:02:58.849 CC lib/init/subsystem.o 00:02:58.849 CC lib/init/subsystem_rpc.o 00:02:58.849 CC lib/init/rpc.o 00:02:59.108 LIB libspdk_init.a 00:02:59.108 LIB libspdk_vfu_tgt.a 00:02:59.108 LIB libspdk_virtio.a 00:02:59.367 LIB libspdk_fsdev.a 00:02:59.367 CC lib/event/app.o 00:02:59.367 CC lib/event/reactor.o 00:02:59.367 CC lib/event/log_rpc.o 00:02:59.367 CC lib/event/app_rpc.o 00:02:59.367 CC lib/event/scheduler_static.o 00:02:59.626 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:59.626 LIB libspdk_event.a 00:02:59.626 LIB libspdk_accel.a 00:02:59.885 LIB libspdk_nvme.a 00:03:00.145 CC lib/bdev/bdev.o 00:03:00.145 CC lib/bdev/bdev_rpc.o 00:03:00.145 CC lib/bdev/bdev_zone.o 00:03:00.145 CC lib/bdev/part.o 00:03:00.145 CC lib/bdev/scsi_nvme.o 00:03:00.145 LIB libspdk_fuse_dispatcher.a 00:03:00.711 LIB libspdk_blob.a 00:03:00.970 CC lib/blobfs/blobfs.o 00:03:00.970 CC lib/lvol/lvol.o 00:03:00.970 CC lib/blobfs/tree.o 00:03:01.538 LIB libspdk_lvol.a 00:03:01.538 LIB libspdk_blobfs.a 00:03:01.798 LIB libspdk_bdev.a 00:03:02.061 CC lib/nvmf/ctrlr.o 00:03:02.061 CC lib/nvmf/ctrlr_bdev.o 00:03:02.061 CC lib/nvmf/ctrlr_discovery.o 00:03:02.061 CC lib/ftl/ftl_core.o 00:03:02.061 CC lib/nvmf/subsystem.o 00:03:02.061 CC lib/ftl/ftl_init.o 00:03:02.061 CC lib/nvmf/nvmf.o 00:03:02.061 CC lib/scsi/dev.o 00:03:02.061 CC lib/ftl/ftl_layout.o 00:03:02.061 CC lib/ftl/ftl_debug.o 00:03:02.061 CC lib/ublk/ublk.o 00:03:02.061 CC lib/scsi/lun.o 00:03:02.061 CC lib/nvmf/nvmf_rpc.o 00:03:02.061 CC lib/ublk/ublk_rpc.o 00:03:02.061 CC lib/ftl/ftl_io.o 00:03:02.061 CC lib/scsi/port.o 00:03:02.061 CC lib/ftl/ftl_sb.o 00:03:02.061 CC lib/nvmf/transport.o 00:03:02.061 CC lib/scsi/scsi.o 00:03:02.061 CC lib/ftl/ftl_l2p.o 00:03:02.061 CC lib/nvmf/tcp.o 00:03:02.061 CC lib/ftl/ftl_l2p_flat.o 00:03:02.061 CC lib/nbd/nbd_rpc.o 00:03:02.061 CC lib/nbd/nbd.o 00:03:02.061 CC lib/nvmf/stubs.o 00:03:02.061 CC lib/ftl/ftl_nv_cache.o 00:03:02.061 CC lib/scsi/scsi_pr.o 00:03:02.061 CC lib/scsi/scsi_bdev.o 00:03:02.061 CC lib/nvmf/mdns_server.o 00:03:02.061 CC lib/scsi/scsi_rpc.o 00:03:02.062 CC lib/ftl/ftl_band.o 00:03:02.062 CC lib/nvmf/vfio_user.o 00:03:02.062 CC lib/nvmf/rdma.o 00:03:02.062 CC lib/ftl/ftl_band_ops.o 00:03:02.062 CC lib/nvmf/auth.o 00:03:02.062 CC lib/scsi/task.o 00:03:02.062 CC lib/ftl/ftl_writer.o 00:03:02.062 CC lib/ftl/ftl_rq.o 00:03:02.062 CC lib/ftl/ftl_reloc.o 00:03:02.062 CC 
lib/ftl/ftl_l2p_cache.o 00:03:02.062 CC lib/ftl/ftl_p2l.o 00:03:02.062 CC lib/ftl/ftl_p2l_log.o 00:03:02.062 CC lib/ftl/mngt/ftl_mngt.o 00:03:02.062 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.062 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.062 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.062 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.062 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.062 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:02.062 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:02.062 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:02.062 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:02.062 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:02.062 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:02.062 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:02.062 CC lib/ftl/utils/ftl_md.o 00:03:02.062 CC lib/ftl/utils/ftl_conf.o 00:03:02.062 CC lib/ftl/utils/ftl_mempool.o 00:03:02.062 CC lib/ftl/utils/ftl_bitmap.o 00:03:02.062 CC lib/ftl/utils/ftl_property.o 00:03:02.062 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:02.062 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:02.062 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:02.062 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:02.062 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:02.062 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:02.062 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:02.062 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:02.062 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:02.062 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:02.062 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:02.062 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:02.320 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:02.320 CC lib/ftl/base/ftl_base_dev.o 00:03:02.320 CC lib/ftl/base/ftl_base_bdev.o 00:03:02.320 CC lib/ftl/ftl_trace.o 00:03:02.579 LIB libspdk_nbd.a 00:03:02.579 LIB libspdk_scsi.a 00:03:02.838 LIB libspdk_ublk.a 00:03:02.838 LIB libspdk_ftl.a 00:03:03.098 CC lib/vhost/vhost.o 00:03:03.098 CC lib/vhost/vhost_rpc.o 00:03:03.098 CC lib/vhost/vhost_scsi.o 00:03:03.098 CC lib/iscsi/conn.o 00:03:03.098 CC lib/vhost/vhost_blk.o 00:03:03.098 CC lib/vhost/rte_vhost_user.o 00:03:03.098 CC lib/iscsi/init_grp.o 00:03:03.098 CC lib/iscsi/iscsi.o 00:03:03.098 CC lib/iscsi/param.o 00:03:03.098 CC lib/iscsi/portal_grp.o 00:03:03.098 CC lib/iscsi/tgt_node.o 00:03:03.098 CC lib/iscsi/iscsi_subsystem.o 00:03:03.098 CC lib/iscsi/iscsi_rpc.o 00:03:03.098 CC lib/iscsi/task.o 00:03:03.357 LIB libspdk_nvmf.a 00:03:03.616 LIB libspdk_vhost.a 00:03:03.876 LIB libspdk_iscsi.a 00:03:04.135 CC module/vfu_device/vfu_virtio_blk.o 00:03:04.135 CC module/vfu_device/vfu_virtio.o 00:03:04.135 CC module/env_dpdk/env_dpdk_rpc.o 00:03:04.135 CC module/vfu_device/vfu_virtio_fs.o 00:03:04.135 CC module/vfu_device/vfu_virtio_scsi.o 00:03:04.135 CC module/vfu_device/vfu_virtio_rpc.o 00:03:04.394 CC module/accel/ioat/accel_ioat.o 00:03:04.394 CC module/accel/ioat/accel_ioat_rpc.o 00:03:04.394 CC module/scheduler/gscheduler/gscheduler.o 00:03:04.394 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:04.394 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:04.394 CC module/accel/error/accel_error_rpc.o 00:03:04.394 CC module/accel/error/accel_error.o 00:03:04.394 CC module/accel/dsa/accel_dsa_rpc.o 00:03:04.394 CC module/accel/dsa/accel_dsa.o 00:03:04.394 LIB libspdk_env_dpdk_rpc.a 00:03:04.394 CC module/sock/posix/posix.o 00:03:04.394 CC module/keyring/file/keyring.o 00:03:04.394 CC module/blob/bdev/blob_bdev.o 00:03:04.394 CC module/keyring/file/keyring_rpc.o 00:03:04.394 CC module/accel/iaa/accel_iaa.o 00:03:04.394 CC module/accel/iaa/accel_iaa_rpc.o 00:03:04.394 CC module/keyring/linux/keyring.o 
00:03:04.394 CC module/keyring/linux/keyring_rpc.o 00:03:04.394 CC module/fsdev/aio/fsdev_aio.o 00:03:04.394 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:04.394 CC module/fsdev/aio/linux_aio_mgr.o 00:03:04.394 LIB libspdk_scheduler_gscheduler.a 00:03:04.394 LIB libspdk_scheduler_dpdk_governor.a 00:03:04.394 LIB libspdk_keyring_file.a 00:03:04.394 LIB libspdk_keyring_linux.a 00:03:04.394 LIB libspdk_accel_error.a 00:03:04.394 LIB libspdk_accel_ioat.a 00:03:04.394 LIB libspdk_scheduler_dynamic.a 00:03:04.654 LIB libspdk_accel_iaa.a 00:03:04.654 LIB libspdk_blob_bdev.a 00:03:04.654 LIB libspdk_accel_dsa.a 00:03:04.654 LIB libspdk_vfu_device.a 00:03:04.912 LIB libspdk_sock_posix.a 00:03:04.912 LIB libspdk_fsdev_aio.a 00:03:04.912 CC module/bdev/ftl/bdev_ftl.o 00:03:04.912 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:04.912 CC module/bdev/null/bdev_null.o 00:03:04.912 CC module/bdev/null/bdev_null_rpc.o 00:03:04.912 CC module/bdev/error/vbdev_error.o 00:03:04.912 CC module/bdev/error/vbdev_error_rpc.o 00:03:04.912 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:04.912 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:04.912 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:04.912 CC module/bdev/nvme/bdev_nvme.o 00:03:04.912 CC module/bdev/nvme/nvme_rpc.o 00:03:04.912 CC module/bdev/nvme/bdev_mdns_client.o 00:03:04.912 CC module/bdev/nvme/vbdev_opal.o 00:03:04.912 CC module/blobfs/bdev/blobfs_bdev.o 00:03:04.912 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:04.912 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:04.912 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:04.913 CC module/bdev/lvol/vbdev_lvol.o 00:03:04.913 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:04.913 CC module/bdev/gpt/gpt.o 00:03:04.913 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:04.913 CC module/bdev/malloc/bdev_malloc.o 00:03:04.913 CC module/bdev/delay/vbdev_delay.o 00:03:04.913 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:04.913 CC module/bdev/gpt/vbdev_gpt.o 00:03:04.913 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:04.913 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:04.913 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:04.913 CC module/bdev/aio/bdev_aio_rpc.o 00:03:04.913 CC module/bdev/aio/bdev_aio.o 00:03:04.913 CC module/bdev/raid/bdev_raid.o 00:03:04.913 CC module/bdev/iscsi/bdev_iscsi.o 00:03:04.913 CC module/bdev/passthru/vbdev_passthru.o 00:03:04.913 CC module/bdev/raid/bdev_raid_sb.o 00:03:04.913 CC module/bdev/raid/bdev_raid_rpc.o 00:03:04.913 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:04.913 CC module/bdev/raid/raid0.o 00:03:04.913 CC module/bdev/raid/concat.o 00:03:04.913 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:04.913 CC module/bdev/raid/raid1.o 00:03:04.913 CC module/bdev/split/vbdev_split.o 00:03:04.913 CC module/bdev/split/vbdev_split_rpc.o 00:03:05.172 LIB libspdk_bdev_split.a 00:03:05.172 LIB libspdk_bdev_error.a 00:03:05.172 LIB libspdk_bdev_null.a 00:03:05.172 LIB libspdk_blobfs_bdev.a 00:03:05.172 LIB libspdk_bdev_ftl.a 00:03:05.172 LIB libspdk_bdev_gpt.a 00:03:05.172 LIB libspdk_bdev_zone_block.a 00:03:05.172 LIB libspdk_bdev_passthru.a 00:03:05.172 LIB libspdk_bdev_iscsi.a 00:03:05.172 LIB libspdk_bdev_delay.a 00:03:05.172 LIB libspdk_bdev_malloc.a 00:03:05.172 LIB libspdk_bdev_aio.a 00:03:05.431 LIB libspdk_bdev_lvol.a 00:03:05.431 LIB libspdk_bdev_virtio.a 00:03:05.691 LIB libspdk_bdev_raid.a 00:03:06.629 LIB libspdk_bdev_nvme.a 00:03:06.889 CC module/event/subsystems/vmd/vmd.o 00:03:06.889 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:06.889 CC module/event/subsystems/keyring/keyring.o 
00:03:06.889 CC module/event/subsystems/iobuf/iobuf.o 00:03:06.889 CC module/event/subsystems/sock/sock.o 00:03:06.889 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:06.889 CC module/event/subsystems/scheduler/scheduler.o 00:03:06.889 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:07.149 CC module/event/subsystems/fsdev/fsdev.o 00:03:07.149 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:07.149 LIB libspdk_event_keyring.a 00:03:07.149 LIB libspdk_event_vmd.a 00:03:07.149 LIB libspdk_event_sock.a 00:03:07.149 LIB libspdk_event_scheduler.a 00:03:07.149 LIB libspdk_event_fsdev.a 00:03:07.149 LIB libspdk_event_iobuf.a 00:03:07.149 LIB libspdk_event_vhost_blk.a 00:03:07.149 LIB libspdk_event_vfu_tgt.a 00:03:07.409 CC module/event/subsystems/accel/accel.o 00:03:07.668 LIB libspdk_event_accel.a 00:03:07.927 CC module/event/subsystems/bdev/bdev.o 00:03:07.927 LIB libspdk_event_bdev.a 00:03:08.186 CC module/event/subsystems/nbd/nbd.o 00:03:08.186 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:08.186 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:08.186 CC module/event/subsystems/scsi/scsi.o 00:03:08.446 CC module/event/subsystems/ublk/ublk.o 00:03:08.446 LIB libspdk_event_nbd.a 00:03:08.446 LIB libspdk_event_ublk.a 00:03:08.446 LIB libspdk_event_scsi.a 00:03:08.446 LIB libspdk_event_nvmf.a 00:03:08.706 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:08.706 CC module/event/subsystems/iscsi/iscsi.o 00:03:08.966 LIB libspdk_event_vhost_scsi.a 00:03:08.966 LIB libspdk_event_iscsi.a 00:03:09.232 CC app/spdk_lspci/spdk_lspci.o 00:03:09.232 TEST_HEADER include/spdk/accel.h 00:03:09.232 CC app/spdk_nvme_perf/perf.o 00:03:09.232 TEST_HEADER include/spdk/accel_module.h 00:03:09.232 TEST_HEADER include/spdk/bdev.h 00:03:09.232 TEST_HEADER include/spdk/barrier.h 00:03:09.232 CC app/trace_record/trace_record.o 00:03:09.232 TEST_HEADER include/spdk/base64.h 00:03:09.232 TEST_HEADER include/spdk/assert.h 00:03:09.232 TEST_HEADER include/spdk/bdev_zone.h 00:03:09.232 TEST_HEADER include/spdk/bdev_module.h 00:03:09.232 TEST_HEADER include/spdk/blob_bdev.h 00:03:09.232 TEST_HEADER include/spdk/bit_array.h 00:03:09.232 TEST_HEADER include/spdk/bit_pool.h 00:03:09.232 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:09.232 TEST_HEADER include/spdk/blobfs.h 00:03:09.232 TEST_HEADER include/spdk/blob.h 00:03:09.232 TEST_HEADER include/spdk/config.h 00:03:09.232 TEST_HEADER include/spdk/conf.h 00:03:09.232 TEST_HEADER include/spdk/cpuset.h 00:03:09.232 CC app/spdk_nvme_discover/discovery_aer.o 00:03:09.232 TEST_HEADER include/spdk/crc16.h 00:03:09.232 TEST_HEADER include/spdk/crc32.h 00:03:09.232 CC test/rpc_client/rpc_client_test.o 00:03:09.232 TEST_HEADER include/spdk/crc64.h 00:03:09.232 TEST_HEADER include/spdk/dif.h 00:03:09.232 TEST_HEADER include/spdk/dma.h 00:03:09.232 TEST_HEADER include/spdk/endian.h 00:03:09.232 TEST_HEADER include/spdk/env.h 00:03:09.232 TEST_HEADER include/spdk/event.h 00:03:09.232 TEST_HEADER include/spdk/env_dpdk.h 00:03:09.232 CC app/spdk_top/spdk_top.o 00:03:09.232 TEST_HEADER include/spdk/fd_group.h 00:03:09.232 TEST_HEADER include/spdk/fd.h 00:03:09.232 TEST_HEADER include/spdk/file.h 00:03:09.232 CXX app/trace/trace.o 00:03:09.232 TEST_HEADER include/spdk/fsdev.h 00:03:09.232 CC app/spdk_nvme_identify/identify.o 00:03:09.232 TEST_HEADER include/spdk/ftl.h 00:03:09.232 TEST_HEADER include/spdk/fsdev_module.h 00:03:09.232 TEST_HEADER include/spdk/gpt_spec.h 00:03:09.233 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:09.233 TEST_HEADER include/spdk/hexlify.h 
00:03:09.233 TEST_HEADER include/spdk/idxd.h 00:03:09.233 TEST_HEADER include/spdk/idxd_spec.h 00:03:09.233 TEST_HEADER include/spdk/histogram_data.h 00:03:09.233 TEST_HEADER include/spdk/init.h 00:03:09.233 TEST_HEADER include/spdk/ioat.h 00:03:09.233 TEST_HEADER include/spdk/iscsi_spec.h 00:03:09.233 TEST_HEADER include/spdk/ioat_spec.h 00:03:09.233 TEST_HEADER include/spdk/json.h 00:03:09.233 TEST_HEADER include/spdk/keyring_module.h 00:03:09.233 TEST_HEADER include/spdk/jsonrpc.h 00:03:09.233 TEST_HEADER include/spdk/keyring.h 00:03:09.233 TEST_HEADER include/spdk/log.h 00:03:09.233 TEST_HEADER include/spdk/likely.h 00:03:09.233 TEST_HEADER include/spdk/md5.h 00:03:09.233 TEST_HEADER include/spdk/lvol.h 00:03:09.233 TEST_HEADER include/spdk/memory.h 00:03:09.233 TEST_HEADER include/spdk/nbd.h 00:03:09.233 TEST_HEADER include/spdk/net.h 00:03:09.233 TEST_HEADER include/spdk/mmio.h 00:03:09.233 TEST_HEADER include/spdk/notify.h 00:03:09.233 TEST_HEADER include/spdk/nvme_intel.h 00:03:09.233 TEST_HEADER include/spdk/nvme.h 00:03:09.233 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:09.233 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:09.233 TEST_HEADER include/spdk/nvme_spec.h 00:03:09.233 TEST_HEADER include/spdk/nvme_zns.h 00:03:09.233 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:09.233 TEST_HEADER include/spdk/nvmf.h 00:03:09.233 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:09.233 TEST_HEADER include/spdk/nvmf_spec.h 00:03:09.233 TEST_HEADER include/spdk/nvmf_transport.h 00:03:09.233 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:09.233 TEST_HEADER include/spdk/opal_spec.h 00:03:09.233 TEST_HEADER include/spdk/pci_ids.h 00:03:09.233 TEST_HEADER include/spdk/opal.h 00:03:09.233 TEST_HEADER include/spdk/queue.h 00:03:09.233 TEST_HEADER include/spdk/pipe.h 00:03:09.233 TEST_HEADER include/spdk/reduce.h 00:03:09.233 TEST_HEADER include/spdk/rpc.h 00:03:09.233 TEST_HEADER include/spdk/scsi.h 00:03:09.233 TEST_HEADER include/spdk/scheduler.h 00:03:09.233 TEST_HEADER include/spdk/scsi_spec.h 00:03:09.233 TEST_HEADER include/spdk/sock.h 00:03:09.233 TEST_HEADER include/spdk/stdinc.h 00:03:09.233 TEST_HEADER include/spdk/thread.h 00:03:09.233 TEST_HEADER include/spdk/string.h 00:03:09.233 TEST_HEADER include/spdk/trace.h 00:03:09.233 TEST_HEADER include/spdk/trace_parser.h 00:03:09.233 TEST_HEADER include/spdk/tree.h 00:03:09.233 TEST_HEADER include/spdk/ublk.h 00:03:09.233 TEST_HEADER include/spdk/util.h 00:03:09.233 TEST_HEADER include/spdk/uuid.h 00:03:09.233 TEST_HEADER include/spdk/version.h 00:03:09.233 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:09.233 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:09.233 TEST_HEADER include/spdk/vhost.h 00:03:09.233 TEST_HEADER include/spdk/vmd.h 00:03:09.233 TEST_HEADER include/spdk/xor.h 00:03:09.233 TEST_HEADER include/spdk/zipf.h 00:03:09.233 CXX test/cpp_headers/accel.o 00:03:09.233 CXX test/cpp_headers/accel_module.o 00:03:09.233 CXX test/cpp_headers/assert.o 00:03:09.233 CXX test/cpp_headers/barrier.o 00:03:09.233 CXX test/cpp_headers/base64.o 00:03:09.233 CXX test/cpp_headers/bdev.o 00:03:09.233 CXX test/cpp_headers/bdev_module.o 00:03:09.233 CXX test/cpp_headers/bit_pool.o 00:03:09.233 CXX test/cpp_headers/bdev_zone.o 00:03:09.233 CXX test/cpp_headers/bit_array.o 00:03:09.233 CXX test/cpp_headers/blob_bdev.o 00:03:09.233 CXX test/cpp_headers/blobfs_bdev.o 00:03:09.233 CXX test/cpp_headers/blobfs.o 00:03:09.233 CXX test/cpp_headers/blob.o 00:03:09.233 CXX test/cpp_headers/conf.o 00:03:09.233 CXX test/cpp_headers/config.o 00:03:09.233 CXX 
test/cpp_headers/cpuset.o 00:03:09.233 CXX test/cpp_headers/crc16.o 00:03:09.233 CXX test/cpp_headers/crc32.o 00:03:09.233 CXX test/cpp_headers/crc64.o 00:03:09.233 CXX test/cpp_headers/dma.o 00:03:09.233 CXX test/cpp_headers/dif.o 00:03:09.233 CXX test/cpp_headers/endian.o 00:03:09.233 CXX test/cpp_headers/env_dpdk.o 00:03:09.233 CXX test/cpp_headers/env.o 00:03:09.233 CXX test/cpp_headers/event.o 00:03:09.233 CXX test/cpp_headers/fd_group.o 00:03:09.233 CXX test/cpp_headers/fd.o 00:03:09.233 CXX test/cpp_headers/file.o 00:03:09.233 CXX test/cpp_headers/fsdev.o 00:03:09.233 CXX test/cpp_headers/fsdev_module.o 00:03:09.233 CXX test/cpp_headers/ftl.o 00:03:09.233 CXX test/cpp_headers/fuse_dispatcher.o 00:03:09.233 CXX test/cpp_headers/gpt_spec.o 00:03:09.233 CXX test/cpp_headers/hexlify.o 00:03:09.233 CXX test/cpp_headers/histogram_data.o 00:03:09.233 CC app/spdk_dd/spdk_dd.o 00:03:09.233 CXX test/cpp_headers/idxd.o 00:03:09.233 CXX test/cpp_headers/idxd_spec.o 00:03:09.233 CXX test/cpp_headers/init.o 00:03:09.233 CXX test/cpp_headers/ioat.o 00:03:09.233 CC app/nvmf_tgt/nvmf_main.o 00:03:09.233 CXX test/cpp_headers/ioat_spec.o 00:03:09.233 CC test/thread/lock/spdk_lock.o 00:03:09.233 CC test/thread/poller_perf/poller_perf.o 00:03:09.233 CC app/iscsi_tgt/iscsi_tgt.o 00:03:09.233 CC test/env/pci/pci_ut.o 00:03:09.233 CC test/env/memory/memory_ut.o 00:03:09.233 CC test/env/vtophys/vtophys.o 00:03:09.233 CXX test/cpp_headers/iscsi_spec.o 00:03:09.233 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:09.233 CC examples/ioat/perf/perf.o 00:03:09.233 CC app/spdk_tgt/spdk_tgt.o 00:03:09.233 CC examples/util/zipf/zipf.o 00:03:09.233 CC examples/ioat/verify/verify.o 00:03:09.233 CC app/fio/nvme/fio_plugin.o 00:03:09.233 CC test/app/jsoncat/jsoncat.o 00:03:09.233 CC test/app/stub/stub.o 00:03:09.233 CC test/app/histogram_perf/histogram_perf.o 00:03:09.233 LINK spdk_lspci 00:03:09.233 CC test/dma/test_dma/test_dma.o 00:03:09.493 CC test/app/bdev_svc/bdev_svc.o 00:03:09.493 CC app/fio/bdev/fio_plugin.o 00:03:09.493 CC test/env/mem_callbacks/mem_callbacks.o 00:03:09.493 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:09.493 LINK rpc_client_test 00:03:09.493 LINK spdk_nvme_discover 00:03:09.493 CXX test/cpp_headers/json.o 00:03:09.493 LINK spdk_trace_record 00:03:09.493 CXX test/cpp_headers/jsonrpc.o 00:03:09.493 CXX test/cpp_headers/keyring.o 00:03:09.493 CXX test/cpp_headers/keyring_module.o 00:03:09.493 CXX test/cpp_headers/likely.o 00:03:09.493 CXX test/cpp_headers/log.o 00:03:09.493 CXX test/cpp_headers/lvol.o 00:03:09.493 LINK poller_perf 00:03:09.493 CXX test/cpp_headers/md5.o 00:03:09.493 CXX test/cpp_headers/memory.o 00:03:09.493 CXX test/cpp_headers/mmio.o 00:03:09.493 CXX test/cpp_headers/nbd.o 00:03:09.493 CXX test/cpp_headers/net.o 00:03:09.493 CXX test/cpp_headers/notify.o 00:03:09.493 CXX test/cpp_headers/nvme.o 00:03:09.493 CXX test/cpp_headers/nvme_intel.o 00:03:09.493 CXX test/cpp_headers/nvme_ocssd.o 00:03:09.493 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:09.493 CXX test/cpp_headers/nvme_spec.o 00:03:09.493 LINK vtophys 00:03:09.493 CXX test/cpp_headers/nvme_zns.o 00:03:09.493 CXX test/cpp_headers/nvmf_cmd.o 00:03:09.493 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:09.493 CXX test/cpp_headers/nvmf.o 00:03:09.493 LINK interrupt_tgt 00:03:09.493 CXX test/cpp_headers/nvmf_spec.o 00:03:09.493 CXX test/cpp_headers/nvmf_transport.o 00:03:09.493 LINK zipf 00:03:09.493 CXX test/cpp_headers/opal.o 00:03:09.493 CXX test/cpp_headers/opal_spec.o 00:03:09.493 CXX test/cpp_headers/pci_ids.o 
00:03:09.493 LINK jsoncat 00:03:09.493 CXX test/cpp_headers/pipe.o 00:03:09.493 CXX test/cpp_headers/queue.o 00:03:09.493 CXX test/cpp_headers/reduce.o 00:03:09.493 CXX test/cpp_headers/rpc.o 00:03:09.493 CXX test/cpp_headers/scheduler.o 00:03:09.493 CXX test/cpp_headers/scsi.o 00:03:09.493 LINK env_dpdk_post_init 00:03:09.493 LINK histogram_perf 00:03:09.493 CXX test/cpp_headers/scsi_spec.o 00:03:09.493 CXX test/cpp_headers/sock.o 00:03:09.493 CXX test/cpp_headers/stdinc.o 00:03:09.493 LINK nvmf_tgt 00:03:09.493 CXX test/cpp_headers/string.o 00:03:09.493 CXX test/cpp_headers/thread.o 00:03:09.493 CXX test/cpp_headers/trace.o 00:03:09.493 LINK verify 00:03:09.493 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:09.493 LINK stub 00:03:09.493 LINK ioat_perf 00:03:09.493 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:09.493 LINK iscsi_tgt 00:03:09.751 LINK spdk_tgt 00:03:09.751 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:03:09.751 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:03:09.751 LINK bdev_svc 00:03:09.751 CXX test/cpp_headers/trace_parser.o 00:03:09.751 CXX test/cpp_headers/ublk.o 00:03:09.751 CXX test/cpp_headers/tree.o 00:03:09.751 CXX test/cpp_headers/util.o 00:03:09.751 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:09.751 CXX test/cpp_headers/uuid.o 00:03:09.751 LINK spdk_trace 00:03:09.751 CXX test/cpp_headers/version.o 00:03:09.751 CXX test/cpp_headers/vfio_user_pci.o 00:03:09.751 CXX test/cpp_headers/vfio_user_spec.o 00:03:09.751 CXX test/cpp_headers/vhost.o 00:03:09.751 CXX test/cpp_headers/vmd.o 00:03:09.751 CXX test/cpp_headers/xor.o 00:03:09.751 CXX test/cpp_headers/zipf.o 00:03:09.751 LINK pci_ut 00:03:09.751 LINK spdk_dd 00:03:10.010 LINK nvme_fuzz 00:03:10.010 LINK test_dma 00:03:10.010 LINK spdk_bdev 00:03:10.010 LINK spdk_nvme 00:03:10.010 LINK spdk_nvme_identify 00:03:10.010 LINK vhost_fuzz 00:03:10.010 LINK spdk_nvme_perf 00:03:10.010 LINK mem_callbacks 00:03:10.010 CC examples/idxd/perf/perf.o 00:03:10.010 CC examples/vmd/lsvmd/lsvmd.o 00:03:10.010 CC examples/vmd/led/led.o 00:03:10.010 CC examples/sock/hello_world/hello_sock.o 00:03:10.010 LINK llvm_vfio_fuzz 00:03:10.010 CC examples/thread/thread/thread_ex.o 00:03:10.267 LINK spdk_top 00:03:10.267 CC app/vhost/vhost.o 00:03:10.267 LINK lsvmd 00:03:10.267 LINK led 00:03:10.267 LINK llvm_nvme_fuzz 00:03:10.267 LINK hello_sock 00:03:10.267 LINK idxd_perf 00:03:10.267 LINK thread 00:03:10.267 LINK vhost 00:03:10.523 LINK memory_ut 00:03:10.523 LINK spdk_lock 00:03:10.780 LINK iscsi_fuzz 00:03:11.037 CC examples/nvme/hotplug/hotplug.o 00:03:11.037 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:11.037 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:11.037 CC examples/nvme/abort/abort.o 00:03:11.037 CC examples/nvme/hello_world/hello_world.o 00:03:11.037 CC examples/nvme/reconnect/reconnect.o 00:03:11.037 CC examples/nvme/arbitration/arbitration.o 00:03:11.037 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:11.295 LINK pmr_persistence 00:03:11.295 LINK hotplug 00:03:11.295 CC test/event/event_perf/event_perf.o 00:03:11.295 LINK cmb_copy 00:03:11.295 CC test/event/reactor/reactor.o 00:03:11.295 CC test/event/reactor_perf/reactor_perf.o 00:03:11.295 LINK hello_world 00:03:11.295 CC test/event/app_repeat/app_repeat.o 00:03:11.295 CC test/event/scheduler/scheduler.o 00:03:11.295 LINK reconnect 00:03:11.295 LINK abort 00:03:11.295 LINK arbitration 00:03:11.295 LINK event_perf 00:03:11.295 LINK nvme_manage 00:03:11.295 LINK reactor 00:03:11.295 LINK reactor_perf 00:03:11.295 LINK app_repeat 00:03:11.554 
LINK scheduler 00:03:11.554 CC test/nvme/overhead/overhead.o 00:03:11.554 CC test/nvme/reserve/reserve.o 00:03:11.554 CC test/nvme/reset/reset.o 00:03:11.554 CC test/nvme/e2edp/nvme_dp.o 00:03:11.554 CC test/nvme/aer/aer.o 00:03:11.554 CC test/nvme/simple_copy/simple_copy.o 00:03:11.554 CC test/nvme/fused_ordering/fused_ordering.o 00:03:11.554 CC test/nvme/err_injection/err_injection.o 00:03:11.554 CC test/nvme/startup/startup.o 00:03:11.554 CC test/nvme/boot_partition/boot_partition.o 00:03:11.554 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:11.554 CC test/nvme/cuse/cuse.o 00:03:11.554 CC test/nvme/sgl/sgl.o 00:03:11.554 CC test/nvme/fdp/fdp.o 00:03:11.554 CC test/nvme/connect_stress/connect_stress.o 00:03:11.554 CC test/nvme/compliance/nvme_compliance.o 00:03:11.554 CC test/accel/dif/dif.o 00:03:11.554 CC test/blobfs/mkfs/mkfs.o 00:03:11.812 CC test/lvol/esnap/esnap.o 00:03:11.812 LINK startup 00:03:11.812 LINK boot_partition 00:03:11.812 LINK err_injection 00:03:11.812 LINK reserve 00:03:11.812 LINK fused_ordering 00:03:11.812 LINK doorbell_aers 00:03:11.812 LINK simple_copy 00:03:11.812 LINK reset 00:03:11.812 LINK aer 00:03:11.812 LINK overhead 00:03:11.812 LINK sgl 00:03:11.812 LINK mkfs 00:03:11.812 LINK nvme_dp 00:03:11.812 LINK fdp 00:03:11.812 LINK connect_stress 00:03:12.071 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:12.071 CC examples/accel/perf/accel_perf.o 00:03:12.071 CC examples/blob/cli/blobcli.o 00:03:12.071 CC examples/blob/hello_world/hello_blob.o 00:03:12.071 LINK nvme_compliance 00:03:12.071 LINK hello_blob 00:03:12.071 LINK dif 00:03:12.071 LINK hello_fsdev 00:03:12.329 LINK accel_perf 00:03:12.329 LINK blobcli 00:03:12.588 LINK cuse 00:03:13.155 CC examples/bdev/hello_world/hello_bdev.o 00:03:13.155 CC examples/bdev/bdevperf/bdevperf.o 00:03:13.414 LINK hello_bdev 00:03:13.672 LINK bdevperf 00:03:13.930 CC test/bdev/bdevio/bdevio.o 00:03:14.189 LINK bdevio 00:03:15.126 CC examples/nvmf/nvmf/nvmf.o 00:03:15.126 LINK esnap 00:03:15.384 LINK nvmf 00:03:16.764 00:03:16.764 real 0m47.358s 00:03:16.764 user 6m59.913s 00:03:16.764 sys 2m22.540s 00:03:16.764 22:10:35 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:16.764 22:10:35 make -- common/autotest_common.sh@10 -- $ set +x 00:03:16.764 ************************************ 00:03:16.764 END TEST make 00:03:16.764 ************************************ 00:03:16.764 22:10:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:16.764 22:10:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:16.764 22:10:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:16.764 22:10:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.764 22:10:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:16.764 22:10:36 -- pm/common@44 -- $ pid=2988580 00:03:16.764 22:10:36 -- pm/common@50 -- $ kill -TERM 2988580 00:03:16.764 22:10:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.764 22:10:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:16.764 22:10:36 -- pm/common@44 -- $ pid=2988582 00:03:16.764 22:10:36 -- pm/common@50 -- $ kill -TERM 2988582 00:03:16.764 22:10:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.764 22:10:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:16.764 22:10:36 -- 
pm/common@44 -- $ pid=2988583 00:03:16.764 22:10:36 -- pm/common@50 -- $ kill -TERM 2988583 00:03:16.764 22:10:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.764 22:10:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:16.764 22:10:36 -- pm/common@44 -- $ pid=2988607 00:03:16.764 22:10:36 -- pm/common@50 -- $ sudo -E kill -TERM 2988607 00:03:16.764 22:10:36 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:16.764 22:10:36 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:03:16.764 22:10:36 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:16.764 22:10:36 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:16.764 22:10:36 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:16.764 22:10:36 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:16.764 22:10:36 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:16.764 22:10:36 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:16.764 22:10:36 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:16.764 22:10:36 -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.764 22:10:36 -- scripts/common.sh@336 -- # read -ra ver1 00:03:16.764 22:10:36 -- scripts/common.sh@337 -- # IFS=.-: 00:03:16.764 22:10:36 -- scripts/common.sh@337 -- # read -ra ver2 00:03:16.764 22:10:36 -- scripts/common.sh@338 -- # local 'op=<' 00:03:16.764 22:10:36 -- scripts/common.sh@340 -- # ver1_l=2 00:03:16.764 22:10:36 -- scripts/common.sh@341 -- # ver2_l=1 00:03:16.764 22:10:36 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:16.764 22:10:36 -- scripts/common.sh@344 -- # case "$op" in 00:03:16.764 22:10:36 -- scripts/common.sh@345 -- # : 1 00:03:16.764 22:10:36 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:16.764 22:10:36 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:16.764 22:10:36 -- scripts/common.sh@365 -- # decimal 1 00:03:16.764 22:10:36 -- scripts/common.sh@353 -- # local d=1 00:03:16.764 22:10:36 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.764 22:10:36 -- scripts/common.sh@355 -- # echo 1 00:03:16.764 22:10:36 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:16.764 22:10:36 -- scripts/common.sh@366 -- # decimal 2 00:03:16.764 22:10:36 -- scripts/common.sh@353 -- # local d=2 00:03:16.764 22:10:36 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.764 22:10:36 -- scripts/common.sh@355 -- # echo 2 00:03:16.764 22:10:36 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:16.764 22:10:36 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:16.764 22:10:36 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:16.764 22:10:36 -- scripts/common.sh@368 -- # return 0 00:03:16.764 22:10:36 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.764 22:10:36 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:16.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.764 --rc genhtml_branch_coverage=1 00:03:16.764 --rc genhtml_function_coverage=1 00:03:16.764 --rc genhtml_legend=1 00:03:16.764 --rc geninfo_all_blocks=1 00:03:16.764 --rc geninfo_unexecuted_blocks=1 00:03:16.764 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.764 ' 00:03:16.764 22:10:36 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:16.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.764 --rc genhtml_branch_coverage=1 00:03:16.764 --rc genhtml_function_coverage=1 00:03:16.764 --rc genhtml_legend=1 00:03:16.764 --rc geninfo_all_blocks=1 00:03:16.764 --rc geninfo_unexecuted_blocks=1 00:03:16.764 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.764 ' 00:03:16.764 22:10:36 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:16.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.764 --rc genhtml_branch_coverage=1 00:03:16.764 --rc genhtml_function_coverage=1 00:03:16.764 --rc genhtml_legend=1 00:03:16.764 --rc geninfo_all_blocks=1 00:03:16.764 --rc geninfo_unexecuted_blocks=1 00:03:16.764 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.764 ' 00:03:16.764 22:10:36 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:16.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.764 --rc genhtml_branch_coverage=1 00:03:16.764 --rc genhtml_function_coverage=1 00:03:16.764 --rc genhtml_legend=1 00:03:16.764 --rc geninfo_all_blocks=1 00:03:16.764 --rc geninfo_unexecuted_blocks=1 00:03:16.764 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.764 ' 00:03:16.764 22:10:36 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:16.764 22:10:36 -- nvmf/common.sh@7 -- # uname -s 00:03:16.764 22:10:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:16.764 22:10:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:16.764 22:10:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:16.764 22:10:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:16.764 22:10:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:16.764 22:10:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:16.764 22:10:36 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:16.764 22:10:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:16.764 22:10:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:16.764 22:10:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:16.765 22:10:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:03:16.765 22:10:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:03:16.765 22:10:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:16.765 22:10:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:16.765 22:10:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:16.765 22:10:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:16.765 22:10:36 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:16.765 22:10:36 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:16.765 22:10:36 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:16.765 22:10:36 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:16.765 22:10:36 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:16.765 22:10:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.765 22:10:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.765 22:10:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.765 22:10:36 -- paths/export.sh@5 -- # export PATH 00:03:16.765 22:10:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.765 22:10:36 -- nvmf/common.sh@51 -- # : 0 00:03:16.765 22:10:36 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:16.765 22:10:36 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:16.765 22:10:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:16.765 22:10:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:16.765 22:10:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:16.765 22:10:36 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:16.765 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:16.765 22:10:36 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:16.765 22:10:36 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:16.765 22:10:36 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:16.765 22:10:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:16.765 22:10:36 -- spdk/autotest.sh@32 -- # uname -s 00:03:16.765 
22:10:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:16.765 22:10:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:16.765 22:10:36 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:16.765 22:10:36 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:16.765 22:10:36 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:16.765 22:10:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:17.024 22:10:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:17.024 22:10:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:17.024 22:10:36 -- spdk/autotest.sh@48 -- # udevadm_pid=3048221 00:03:17.024 22:10:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:17.024 22:10:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:17.024 22:10:36 -- pm/common@17 -- # local monitor 00:03:17.024 22:10:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.024 22:10:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.024 22:10:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.024 22:10:36 -- pm/common@21 -- # date +%s 00:03:17.024 22:10:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.024 22:10:36 -- pm/common@21 -- # date +%s 00:03:17.024 22:10:36 -- pm/common@25 -- # sleep 1 00:03:17.024 22:10:36 -- pm/common@21 -- # date +%s 00:03:17.024 22:10:36 -- pm/common@21 -- # date +%s 00:03:17.024 22:10:36 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730236236 00:03:17.024 22:10:36 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730236236 00:03:17.024 22:10:36 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730236236 00:03:17.024 22:10:36 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730236236 00:03:17.024 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730236236_collect-cpu-load.pm.log 00:03:17.024 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730236236_collect-vmstat.pm.log 00:03:17.024 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730236236_collect-cpu-temp.pm.log 00:03:17.024 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730236236_collect-bmc-pm.bmc.pm.log 00:03:17.963 22:10:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:17.963 22:10:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:17.963 22:10:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:17.963 22:10:37 -- common/autotest_common.sh@10 -- # set +x 
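Before any tests run, autotest.sh swaps the kernel's core handler for SPDK's collector and spins up the power/load monitors whose "Redirecting to ... pm.log" lines appear above. The snippet below is a rough sketch of that prologue, not the script itself: rootdir and output_dir are placeholders, the /proc/sys/kernel/core_pattern write target is inferred (xtrace does not show redirections), and the whole thing needs root.

    # Sketch of the core-dump and monitor setup traced above; paths are placeholders.
    rootdir=/path/to/spdk
    output_dir=$rootdir/../output

    old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # e.g. the systemd-coredump pipe seen above
    mkdir -p "$output_dir/coredumps"
    # %P = dumping PID, %s = signal number, %t = time of dump; pipe the core to the collector
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT   # assumed cleanup step

    # one background collector per resource, each writing its own pm.log
    ts=$(date +%s)
    for collector in collect-cpu-load collect-vmstat collect-cpu-temp; do
        "$rootdir/scripts/perf/pm/$collector" \
            -d "$output_dir/power" -l -p "monitor.autotest.sh.$ts" &
    done

The collect-bmc-pm helper in the trace is started the same way, only under sudo -E.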
00:03:17.963 22:10:37 -- spdk/autotest.sh@59 -- # create_test_list 00:03:17.963 22:10:37 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:17.963 22:10:37 -- common/autotest_common.sh@10 -- # set +x 00:03:17.963 22:10:37 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:17.963 22:10:37 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:17.963 22:10:37 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:17.963 22:10:37 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:17.963 22:10:37 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:17.963 22:10:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:17.963 22:10:37 -- common/autotest_common.sh@1455 -- # uname 00:03:17.963 22:10:37 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:17.963 22:10:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:17.963 22:10:37 -- common/autotest_common.sh@1475 -- # uname 00:03:17.963 22:10:37 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:17.963 22:10:37 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:17.963 22:10:37 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:03:17.963 lcov: LCOV version 1.15 00:03:17.963 22:10:37 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:03:23.293 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:03:28.576 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:33.878 22:10:53 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:33.878 22:10:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:33.878 22:10:53 -- common/autotest_common.sh@10 -- # set +x 00:03:33.878 22:10:53 -- spdk/autotest.sh@78 -- # rm -f 00:03:33.878 22:10:53 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.171 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:37.171 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:37.171 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:37.171 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:37.171 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:37.171 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:37.171 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:37.171 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:37.171 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:37.171 
0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:37.171 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:37.430 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:37.430 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:37.430 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:37.430 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:37.430 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:37.430 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:37.430 22:10:56 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:37.430 22:10:56 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:37.430 22:10:56 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:37.430 22:10:56 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:37.430 22:10:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:37.430 22:10:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:37.430 22:10:56 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:37.430 22:10:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:37.430 22:10:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:37.430 22:10:56 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:37.430 22:10:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:37.430 22:10:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:37.430 22:10:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:37.430 22:10:56 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:37.430 22:10:56 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:37.689 No valid GPT data, bailing 00:03:37.689 22:10:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:37.689 22:10:56 -- scripts/common.sh@394 -- # pt= 00:03:37.689 22:10:56 -- scripts/common.sh@395 -- # return 1 00:03:37.689 22:10:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:37.689 1+0 records in 00:03:37.689 1+0 records out 00:03:37.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00208617 s, 503 MB/s 00:03:37.690 22:10:56 -- spdk/autotest.sh@105 -- # sync 00:03:37.690 22:10:56 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:37.690 22:10:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:37.690 22:10:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:42.964 22:11:02 -- spdk/autotest.sh@111 -- # uname -s 00:03:42.964 22:11:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:42.964 22:11:02 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:03:42.964 22:11:02 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:42.964 22:11:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:42.964 22:11:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:42.964 22:11:02 -- common/autotest_common.sh@10 -- # set +x 00:03:42.964 ************************************ 00:03:42.964 START TEST setup.sh 00:03:42.964 ************************************ 00:03:42.964 22:11:02 setup.sh -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:42.964 * Looking for test storage... 
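The wipe that follows ("No valid GPT data, bailing", then the 1 MiB dd) is gated twice: is_block_zoned skips zoned namespaces by reading /sys/block/<dev>/queue/zoned, and block_in_use refuses to touch a disk that still advertises a partition table. A reduced sketch of those guards, keeping only the checks visible in the trace (the spdk-gpt.py probe is folded into the blkid check here, and the device name is just an example):

    # Sketch of the pre-wipe guards traced above; needs root and a real NVMe device.
    is_block_zoned() {
        local dev=$1
        [[ -e /sys/block/$dev/queue/zoned ]] &&
            [[ $(< /sys/block/$dev/queue/zoned) != none ]]
    }

    dev=/dev/nvme0n1
    if is_block_zoned "$(basename "$dev")"; then
        echo "skipping zoned device $dev"
        exit 0
    fi
    # treat the disk as in use if blkid can still read a partition-table type from it
    if pt=$(blkid -s PTTYPE -o value "$dev") && [[ -n $pt ]]; then
        echo "$dev has a $pt partition table, leaving it alone"
        exit 0
    fi
    dd if=/dev/zero of="$dev" bs=1M count=1   # clear stale metadata, as in the 1+0 records run above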
00:03:42.964 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:42.964 22:11:02 setup.sh -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:42.964 22:11:02 setup.sh -- common/autotest_common.sh@1691 -- # lcov --version 00:03:42.964 22:11:02 setup.sh -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:42.964 22:11:02 setup.sh -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@345 -- # : 1 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@353 -- # local d=1 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@355 -- # echo 1 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@353 -- # local d=2 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@355 -- # echo 2 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.964 22:11:02 setup.sh -- scripts/common.sh@368 -- # return 0 00:03:42.964 22:11:02 setup.sh -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.964 22:11:02 setup.sh -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:42.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.964 --rc genhtml_branch_coverage=1 00:03:42.964 --rc genhtml_function_coverage=1 00:03:42.964 --rc genhtml_legend=1 00:03:42.964 --rc geninfo_all_blocks=1 00:03:42.964 --rc geninfo_unexecuted_blocks=1 00:03:42.964 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:42.964 ' 00:03:42.964 22:11:02 setup.sh -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:42.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.964 --rc genhtml_branch_coverage=1 00:03:42.964 --rc genhtml_function_coverage=1 00:03:42.964 --rc genhtml_legend=1 00:03:42.964 --rc geninfo_all_blocks=1 00:03:42.964 --rc geninfo_unexecuted_blocks=1 
00:03:42.964 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:42.964 ' 00:03:42.964 22:11:02 setup.sh -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:42.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.964 --rc genhtml_branch_coverage=1 00:03:42.964 --rc genhtml_function_coverage=1 00:03:42.964 --rc genhtml_legend=1 00:03:42.964 --rc geninfo_all_blocks=1 00:03:42.964 --rc geninfo_unexecuted_blocks=1 00:03:42.964 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:42.964 ' 00:03:42.964 22:11:02 setup.sh -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:42.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.964 --rc genhtml_branch_coverage=1 00:03:42.964 --rc genhtml_function_coverage=1 00:03:42.964 --rc genhtml_legend=1 00:03:42.964 --rc geninfo_all_blocks=1 00:03:42.964 --rc geninfo_unexecuted_blocks=1 00:03:42.964 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:42.964 ' 00:03:42.964 22:11:02 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:42.964 22:11:02 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:42.964 22:11:02 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:42.964 22:11:02 setup.sh -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:42.964 22:11:02 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:42.964 22:11:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:42.964 ************************************ 00:03:42.964 START TEST acl 00:03:42.964 ************************************ 00:03:42.964 22:11:02 setup.sh.acl -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:43.224 * Looking for test storage... 
00:03:43.224 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:43.224 22:11:02 setup.sh.acl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:43.224 22:11:02 setup.sh.acl -- common/autotest_common.sh@1691 -- # lcov --version 00:03:43.224 22:11:02 setup.sh.acl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:43.224 22:11:02 setup.sh.acl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:43.224 22:11:02 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:03:43.224 22:11:02 setup.sh.acl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.224 22:11:02 setup.sh.acl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:43.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.224 --rc genhtml_branch_coverage=1 00:03:43.224 --rc genhtml_function_coverage=1 00:03:43.224 --rc genhtml_legend=1 00:03:43.224 --rc geninfo_all_blocks=1 00:03:43.224 --rc geninfo_unexecuted_blocks=1 00:03:43.224 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:43.224 ' 00:03:43.224 22:11:02 setup.sh.acl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:43.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.224 --rc genhtml_branch_coverage=1 00:03:43.224 --rc 
genhtml_function_coverage=1 00:03:43.224 --rc genhtml_legend=1 00:03:43.224 --rc geninfo_all_blocks=1 00:03:43.224 --rc geninfo_unexecuted_blocks=1 00:03:43.224 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:43.224 ' 00:03:43.224 22:11:02 setup.sh.acl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:43.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.224 --rc genhtml_branch_coverage=1 00:03:43.224 --rc genhtml_function_coverage=1 00:03:43.224 --rc genhtml_legend=1 00:03:43.224 --rc geninfo_all_blocks=1 00:03:43.224 --rc geninfo_unexecuted_blocks=1 00:03:43.224 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:43.224 ' 00:03:43.224 22:11:02 setup.sh.acl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:43.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.224 --rc genhtml_branch_coverage=1 00:03:43.224 --rc genhtml_function_coverage=1 00:03:43.224 --rc genhtml_legend=1 00:03:43.224 --rc geninfo_all_blocks=1 00:03:43.224 --rc geninfo_unexecuted_blocks=1 00:03:43.225 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:43.225 ' 00:03:43.225 22:11:02 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:43.225 22:11:02 setup.sh.acl -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:43.225 22:11:02 setup.sh.acl -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:43.225 22:11:02 setup.sh.acl -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:43.225 22:11:02 setup.sh.acl -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:43.225 22:11:02 setup.sh.acl -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:43.225 22:11:02 setup.sh.acl -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:43.225 22:11:02 setup.sh.acl -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:43.225 22:11:02 setup.sh.acl -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:43.225 22:11:02 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:43.225 22:11:02 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:43.225 22:11:02 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:43.225 22:11:02 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:43.225 22:11:02 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:43.225 22:11:02 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.225 22:11:02 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:47.418 22:11:06 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:47.418 22:11:06 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:47.418 22:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.418 22:11:06 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:47.418 22:11:06 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.418 22:11:06 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:49.957 Hugepages 00:03:49.957 node hugesize free / total 00:03:49.957 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:49.957 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:49.957 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.957 22:11:09 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:49.957 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:49.957 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.957 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:49.957 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:49.957 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.957 00:03:49.957 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:49.957 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:49.957 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:49.957 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.957 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 
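collect_setup_devs above works through the table printed by setup.sh status (header "Type BDF Vendor Device NUMA Driver Device Block devices"): lines whose second field is not a BDF are skipped, ioatdma channels are skipped, and an NVMe controller such as 0000:5e:00.0 is recorded unless it appears in PCI_BLOCKED. A compact sketch of that loop, with the status output assumed to arrive via a stand-in invocation and the blocked-list branch direction inferred:

    # Sketch of the classification loop traced above. Field order follows the
    # "Type BDF Vendor Device NUMA Driver Device" header from setup.sh status.
    devs=()
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue            # skip hugepage summary and header lines
        [[ $driver == nvme ]] || continue            # ioatdma channels and the like are ignored
        [[ $PCI_BLOCKED == *"$dev"* ]] && continue   # assumed: controllers on the block list are dropped
        devs+=("$dev")
        drivers[$dev]=$driver
    done < <(./scripts/setup.sh status)              # stand-in for the "setup output status" call above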
00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.958 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:50.218 22:11:09 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:50.218 22:11:09 setup.sh.acl -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:50.218 22:11:09 setup.sh.acl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:50.218 22:11:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:50.218 ************************************ 00:03:50.218 START TEST denied 00:03:50.218 ************************************ 00:03:50.218 22:11:09 setup.sh.acl.denied -- 
common/autotest_common.sh@1127 -- # denied 00:03:50.218 22:11:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:50.218 22:11:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:50.218 22:11:09 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:50.218 22:11:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.218 22:11:09 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:54.416 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:54.416 22:11:13 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:54.416 22:11:13 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:54.416 22:11:13 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:54.416 22:11:13 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:54.416 22:11:13 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:54.416 22:11:13 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:54.416 22:11:13 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:54.416 22:11:13 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:54.416 22:11:13 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.416 22:11:13 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.617 00:03:58.617 real 0m8.022s 00:03:58.617 user 0m2.550s 00:03:58.617 sys 0m4.739s 00:03:58.617 22:11:17 setup.sh.acl.denied -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:58.617 22:11:17 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:58.617 ************************************ 00:03:58.617 END TEST denied 00:03:58.617 ************************************ 00:03:58.617 22:11:17 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:58.617 22:11:17 setup.sh.acl -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:58.617 22:11:17 setup.sh.acl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:58.617 22:11:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.617 ************************************ 00:03:58.617 START TEST allowed 00:03:58.617 ************************************ 00:03:58.617 22:11:17 setup.sh.acl.allowed -- common/autotest_common.sh@1127 -- # allowed 00:03:58.617 22:11:17 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:58.617 22:11:17 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:58.617 22:11:17 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:58.617 22:11:17 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.617 22:11:17 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:05.185 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:05.185 22:11:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:05.185 22:11:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:05.185 22:11:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:05.185 22:11:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.185 22:11:24 setup.sh.acl.allowed 
-- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:08.473 00:04:08.473 real 0m10.281s 00:04:08.473 user 0m2.504s 00:04:08.473 sys 0m4.666s 00:04:08.473 22:11:27 setup.sh.acl.allowed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:08.473 22:11:27 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:08.473 ************************************ 00:04:08.473 END TEST allowed 00:04:08.473 ************************************ 00:04:08.473 00:04:08.473 real 0m25.604s 00:04:08.473 user 0m7.745s 00:04:08.473 sys 0m14.298s 00:04:08.473 22:11:27 setup.sh.acl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:08.473 22:11:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:08.473 ************************************ 00:04:08.473 END TEST acl 00:04:08.473 ************************************ 00:04:08.732 22:11:28 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:08.732 22:11:28 setup.sh -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:08.732 22:11:28 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:08.732 22:11:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.732 ************************************ 00:04:08.732 START TEST hugepages 00:04:08.732 ************************************ 00:04:08.732 22:11:28 setup.sh.hugepages -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:08.732 * Looking for test storage... 00:04:08.732 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:08.732 22:11:28 setup.sh.hugepages -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:08.732 22:11:28 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # lcov --version 00:04:08.732 22:11:28 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:08.993 22:11:28 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.993 22:11:28 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:04:08.993 22:11:28 setup.sh.hugepages -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.993 22:11:28 setup.sh.hugepages -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:08.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.993 --rc genhtml_branch_coverage=1 00:04:08.993 --rc genhtml_function_coverage=1 00:04:08.993 --rc genhtml_legend=1 00:04:08.993 --rc geninfo_all_blocks=1 00:04:08.993 --rc geninfo_unexecuted_blocks=1 00:04:08.993 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:08.993 ' 00:04:08.993 22:11:28 setup.sh.hugepages -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:08.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.993 --rc genhtml_branch_coverage=1 00:04:08.993 --rc genhtml_function_coverage=1 00:04:08.993 --rc genhtml_legend=1 00:04:08.993 --rc geninfo_all_blocks=1 00:04:08.993 --rc geninfo_unexecuted_blocks=1 00:04:08.993 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:08.993 ' 00:04:08.993 22:11:28 setup.sh.hugepages -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:08.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.993 --rc genhtml_branch_coverage=1 00:04:08.993 --rc genhtml_function_coverage=1 00:04:08.993 --rc genhtml_legend=1 00:04:08.993 --rc geninfo_all_blocks=1 00:04:08.993 --rc geninfo_unexecuted_blocks=1 00:04:08.993 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:08.993 ' 00:04:08.993 22:11:28 setup.sh.hugepages -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:08.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.993 --rc genhtml_branch_coverage=1 00:04:08.993 --rc genhtml_function_coverage=1 00:04:08.993 --rc genhtml_legend=1 00:04:08.993 --rc geninfo_all_blocks=1 00:04:08.993 --rc geninfo_unexecuted_blocks=1 00:04:08.993 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:08.993 ' 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:08.993 22:11:28 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 73254588 kB' 'MemAvailable: 77399148 kB' 'Buffers: 9772 kB' 'Cached: 12145756 kB' 'SwapCached: 0 kB' 'Active: 8751872 kB' 'Inactive: 4032888 kB' 'Active(anon): 8266200 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632780 kB' 'Mapped: 157112 kB' 'Shmem: 7636968 kB' 'KReclaimable: 461936 kB' 'Slab: 966832 kB' 'SReclaimable: 461936 kB' 'SUnreclaim: 504896 kB' 'KernelStack: 16016 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434204 kB' 'Committed_AS: 9578708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198848 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.993 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 
22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.994 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.995 
22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 
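The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one key at a time (IFS=': '; read -r var val _), skipping every key that is not Hugepagesize and finally echoing 2048, which hugepages.sh then stores as default_hugepages. A minimal sketch of that kind of lookup is shown below; the function name get_meminfo_value is illustrative only and is not the SPDK helper itself:

    # Sketch only: read one value from /proc/meminfo by key, mirroring the
    # field splitting seen in the trace (IFS=': ' splits "Key: value kB").
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys, as traced above
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    # e.g. default hugepage size in kB (2048 on this node, per the log)
    default_hugepages=$(get_meminfo_value Hugepagesize)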
00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:08.995 22:11:28 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:04:08.995 22:11:28 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:08.995 22:11:28 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:08.995 22:11:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.995 ************************************ 00:04:08.995 START TEST single_node_setup 00:04:08.995 ************************************ 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1127 -- # single_node_setup 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup 
-- setup/hugepages.sh@48 -- # local size=2097152 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.995 22:11:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:12.289 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:12.289 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:15.587 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # 
verify_nr_hugepages 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75463664 kB' 'MemAvailable: 79608256 kB' 'Buffers: 9772 kB' 'Cached: 12145892 kB' 'SwapCached: 0 kB' 'Active: 8754052 kB' 'Inactive: 4032888 kB' 'Active(anon): 8268380 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634624 kB' 'Mapped: 156180 kB' 'Shmem: 7637104 kB' 'KReclaimable: 461968 kB' 'Slab: 965736 kB' 'SReclaimable: 461968 kB' 'SUnreclaim: 503768 kB' 'KernelStack: 16176 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9580816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198864 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
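The entries above are verify_nr_hugepages beginning its checks: it confirms the transparent hugepage mode is not set to [never] (the "always [madvise] never" string in the test), then runs get_meminfo AnonHugePages against the full /proc/meminfo snapshot printed in the trace (HugePages_Total: 1024, Hugepagesize: 2048 kB). A rough equivalent of those checks using generic shell only is sketched below; the variable names are illustrative, not the SPDK code itself:

    # Sketch of the verification idea traced above, not the SPDK implementation.
    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"

    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

    echo "THP: $thp_mode | AnonHugePages: ${anon_kb} kB | HugePages: $total total / $free free @ ${size_kb} kB"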
00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.587 22:11:34 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.587 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 
00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:15.588 22:11:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.588 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75465368 kB' 'MemAvailable: 79609864 kB' 'Buffers: 9772 kB' 'Cached: 12145896 kB' 'SwapCached: 0 kB' 'Active: 8754208 kB' 'Inactive: 4032888 kB' 'Active(anon): 8268536 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634756 kB' 'Mapped: 156100 kB' 'Shmem: 7637108 kB' 'KReclaimable: 461872 kB' 'Slab: 965624 kB' 'SReclaimable: 461872 kB' 'SUnreclaim: 503752 kB' 'KernelStack: 16240 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9581848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198960 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.589 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75466044 kB' 'MemAvailable: 79610540 kB' 'Buffers: 9772 kB' 'Cached: 12145912 kB' 'SwapCached: 0 kB' 'Active: 8753804 kB' 'Inactive: 4032888 kB' 'Active(anon): 8268132 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634876 kB' 'Mapped: 156092 kB' 'Shmem: 7637124 kB' 'KReclaimable: 461872 kB' 'Slab: 965600 kB' 'SReclaimable: 461872 kB' 'SUnreclaim: 503728 kB' 'KernelStack: 16128 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9581868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198832 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 
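The common.sh@17-@29 steps at the top of this get_meminfo call pick the data source before any key is scanned: with an empty node argument the per-node path /sys/devices/system/node/node/meminfo does not exist, so the scan stays on /proc/meminfo, and the mapfile'd lines have any "Node <N> " prefix stripped so both sources look alike. A minimal standalone sketch of that selection, assuming Linux; pick_meminfo_lines is an illustrative name, not the script's own helper:

#!/usr/bin/env bash
# Sketch of the source selection traced at common.sh@22-@29: default to the
# system-wide /proc/meminfo, switch to the per-node file only when it exists,
# then strip the "Node <N> " prefix that only the sysfs files carry.
# pick_meminfo_lines is an illustrative name, not taken from the script.
shopt -s extglob

pick_meminfo_lines() {
    local node=$1 mem_f mem
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Normalize per-node lines ("Node 0 HugePages_Surp: ...") to the same
    # "key: value" shape as /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}

pick_meminfo_lines      # system-wide view, as in the trace above
pick_meminfo_lines 0    # node 0 view, used later for the per-node checks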
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:15.592 nr_hugepages=1024 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:15.592 resv_hugepages=0 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:15.592 surplus_hugepages=0 00:04:15.592 22:11:35 
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:15.592 anon_hugepages=0 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75465728 kB' 'MemAvailable: 79610224 kB' 'Buffers: 9772 kB' 'Cached: 12145912 kB' 'SwapCached: 0 kB' 'Active: 8754228 kB' 'Inactive: 4032888 kB' 'Active(anon): 8268556 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634784 kB' 'Mapped: 156092 kB' 'Shmem: 7637124 kB' 'KReclaimable: 461872 kB' 'Slab: 965600 kB' 'SReclaimable: 461872 kB' 'SUnreclaim: 503728 kB' 'KernelStack: 16096 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9580520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198848 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.593 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.594 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.855 22:11:35 
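Once HugePages_Total is read back (1024 at common.sh@33 just below), hugepages.sh@106-@116 tie the numbers together: the system-wide total must equal nr_hugepages plus the surplus and reserved counts gathered above, and the same key lookup is then repeated against each node's meminfo under /sys/devices/system/node (no_nodes=2 on this box, with 1024 pages recorded for node 0 and 0 for node 1). A compact sketch of those checks, using awk in place of the script's pure-bash read loop; the helper name get and the mismatch message are illustrative only:

#!/usr/bin/env bash
# Sketch of the consistency checks traced at hugepages.sh@106-@116: the
# system-wide HugePages_Total must equal nr_hugepages plus surplus and
# reserved pages, then the same lookup runs against every node's meminfo.
# nr_hugepages=1024 mirrors the value echoed at hugepages.sh@101 in this run.
nr_hugepages=1024

get() { awk -v k="$1:" '$1 == k {print $2; exit}' "$2"; }

total=$(get HugePages_Total /proc/meminfo)
surp=$(get HugePages_Surp /proc/meminfo)
resv=$(get HugePages_Rsvd /proc/meminfo)
(( total == nr_hugepages + surp + resv )) \
    || echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv"

# Per-node files keep the "Node <N>" prefix, so the key sits in field 3.
for node in /sys/devices/system/node/node[0-9]*; do
    id=${node##*node}
    node_surp=$(awk '$3 == "HugePages_Surp:" {print $4; exit}' "$node/meminfo")
    echo "node$id HugePages_Surp=${node_surp:-0}"
done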
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- 
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.855 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 41077192 kB' 'MemUsed: 6987744 kB' 'SwapCached: 0 kB' 'Active: 3594676 kB' 'Inactive: 305328 kB' 'Active(anon): 3268572 kB' 'Inactive(anon): 0 kB' 'Active(file): 326104 kB' 'Inactive(file): 305328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3415828 kB' 'Mapped: 61668 kB' 'AnonPages: 487412 kB' 'Shmem: 2784396 kB' 'KernelStack: 9480 kB' 'PageTables: 5260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 235788 kB' 'Slab: 519680 kB' 'SReclaimable: 235788 kB' 'SUnreclaim: 283892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.856 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:15.857 node0=1024 expecting 1024 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:15.857 00:04:15.857 real 0m6.776s 00:04:15.857 user 0m1.409s 00:04:15.857 sys 0m2.321s 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:15.857 22:11:35 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:04:15.857 ************************************ 00:04:15.857 END TEST single_node_setup 00:04:15.857 ************************************ 00:04:15.857 22:11:35 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:04:15.857 22:11:35 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:15.857 22:11:35 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.857 22:11:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.857 ************************************ 00:04:15.857 START TEST even_2G_alloc 00:04:15.857 ************************************ 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1127 -- # even_2G_alloc 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local 
_nr_hugepages=1024 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.857 22:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:19.160 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:19.160 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.160 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:19.160 22:11:38 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75512788 kB' 'MemAvailable: 79657284 kB' 'Buffers: 9772 kB' 'Cached: 12146044 kB' 'SwapCached: 0 kB' 'Active: 8752928 kB' 'Inactive: 4032888 kB' 'Active(anon): 8267256 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633244 kB' 'Mapped: 155568 kB' 'Shmem: 7637256 kB' 'KReclaimable: 461872 kB' 'Slab: 965380 kB' 'SReclaimable: 461872 kB' 'SUnreclaim: 503508 kB' 'KernelStack: 15904 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9573068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198832 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.160 22:11:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.160 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
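The xtrace lines above and below all come from the same get_meminfo loop in setup/common.sh: the helper reads /proc/meminfo (or, when a node is given, the per-node copy under /sys/devices/system/node, stripping the leading "Node N " prefix as the mem=("${mem[@]#Node +([0-9]) }") line shows), splits each line with IFS=': ', skips every field whose name does not match the one requested, and echoes the matching value before returning. A minimal stand-alone sketch of that pattern, under a hypothetical name (get_meminfo_field) rather than the exact SPDK helper, and reading only the system-wide file:

    get_meminfo_field() {
        # Illustrative stand-in for setup/common.sh's get_meminfo; system-wide only.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip fields until the requested one comes up, then print its value.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done </proc/meminfo
        return 1
    }
    # Example: get_meminfo_field AnonHugePages   # prints 0 here ('AnonHugePages: 0 kB' in the snapshot)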
00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 
22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:19.161 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75512908 kB' 'MemAvailable: 79657404 kB' 'Buffers: 9772 kB' 'Cached: 12146048 kB' 'SwapCached: 0 kB' 'Active: 8752828 kB' 
'Inactive: 4032888 kB' 'Active(anon): 8267156 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633120 kB' 'Mapped: 155548 kB' 'Shmem: 7637260 kB' 'KReclaimable: 461872 kB' 'Slab: 965380 kB' 'SReclaimable: 461872 kB' 'SUnreclaim: 503508 kB' 'KernelStack: 15920 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9573088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198816 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
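The scan in progress here is the HugePages_Surp read inside verify_nr_hugepages. The 1024-page figure it is checking against was set up earlier in the trace: get_test_nr_hugepages is called with 2097152 (kB, i.e. 2G), which at the 2048 kB hugepage size shown in the meminfo snapshot corresponds to nr_hugepages=1024, and get_test_nr_hugepages_per_node then splits that evenly over the two NUMA nodes as 512 + 512. A worked restatement of that arithmetic, with illustrative variable names rather than the hugepages.sh internals:

    size_kb=2097152                             # the 2G request passed to get_test_nr_hugepages
    hugepage_kb=2048                            # 'Hugepagesize: 2048 kB' from the meminfo snapshot
    nr_hugepages=$(( size_kb / hugepage_kb ))   # 1024, matching nr_hugepages=1024 in the trace
    no_nodes=2                                  # _no_nodes=2 in the trace
    per_node=$(( nr_hugepages / no_nodes ))     # 512, matching nodes_test[0]=nodes_test[1]=512
    echo "nr_hugepages=$nr_hugepages per_node=$per_node"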
00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.162 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.163 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75511524 kB' 'MemAvailable: 79656020 kB' 'Buffers: 9772 kB' 'Cached: 12146064 kB' 'SwapCached: 0 kB' 'Active: 8752816 kB' 'Inactive: 4032888 kB' 'Active(anon): 8267144 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633136 kB' 'Mapped: 155600 kB' 'Shmem: 7637276 kB' 'KReclaimable: 461872 kB' 'Slab: 965448 kB' 'SReclaimable: 461872 kB' 'SUnreclaim: 503576 kB' 'KernelStack: 15936 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9573108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198816 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.164 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:19.165 nr_hugepages=1024 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:19.165 resv_hugepages=0 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:19.165 surplus_hugepages=0 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:19.165 anon_hugepages=0 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.165 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75511788 kB' 'MemAvailable: 79656284 kB' 'Buffers: 9772 kB' 'Cached: 12146084 kB' 'SwapCached: 0 kB' 'Active: 8752844 kB' 'Inactive: 4032888 kB' 'Active(anon): 8267172 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633132 kB' 'Mapped: 155600 kB' 'Shmem: 7637296 kB' 'KReclaimable: 461872 kB' 'Slab: 965448 kB' 'SReclaimable: 461872 kB' 'SUnreclaim: 503576 kB' 'KernelStack: 15936 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9573128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198816 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:19.166 22:11:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.166 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:19.167 22:11:38 
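At this point in the trace setup/hugepages.sh has read HugePages_Total (1024 pages of 2048 kB), found two NUMA nodes via get_nodes, and planned an even split of 512 pages per node (nodes_sys[0]=nodes_sys[1]=512). A minimal bash sketch of that even split, assuming only the sysfs layout visible in the trace; this is a simplified illustration, not the verbatim setup/hugepages.sh logic:

#!/usr/bin/env bash
# Sketch: spread nr_hugepages evenly across the NUMA nodes found in sysfs.
# Hypothetical simplification of the get_nodes / even_2G_alloc flow traced above.
shopt -s extglob nullglob
nr_hugepages=1024
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=0
done
no_nodes=${#nodes_sys[@]}                 # 2 on this machine
(( no_nodes > 0 )) || exit 1
per_node=$(( nr_hugepages / no_nodes ))   # 512 pages of 2048 kB each
for n in "${!nodes_sys[@]}"; do
    nodes_sys[$n]=$per_node
    echo "node$n -> ${nodes_sys[$n]} hugepages"
done

The per-node HugePages_Surp lookups that follow in the trace perform the same meminfo-style parse against /sys/devices/system/node/node<N>/meminfo to confirm how the kernel actually distributed the pages.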
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 42157640 kB' 'MemUsed: 5907296 kB' 'SwapCached: 0 kB' 'Active: 3595436 kB' 'Inactive: 305328 kB' 'Active(anon): 3269332 kB' 'Inactive(anon): 0 kB' 'Active(file): 326104 kB' 'Inactive(file): 305328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3415940 kB' 'Mapped: 61164 kB' 'AnonPages: 488040 kB' 'Shmem: 2784508 kB' 'KernelStack: 9480 kB' 'PageTables: 5208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 235788 kB' 'Slab: 519716 kB' 'SReclaimable: 235788 kB' 'SUnreclaim: 283928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.167 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.168 22:11:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.168 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.168 22:11:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[repeated setup/common.sh@31-@32 trace: the remaining per-node meminfo fields (Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) are each read with IFS=': ' / read -r var val _ and skipped with continue while scanning for HugePages_Surp]
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
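The hugepages.sh@114-@116 steps around this point accumulate, for each NUMA node, the reserved and surplus hugepage counts on top of the pages the test requested; in this run both adjustments are zero, so the expected counts stay at 512 per node. A minimal bash sketch of that accumulation, reconstructed from the trace (nodes_test, resv and the get_meminfo helper are assumed to already exist; the real script text may differ):

    # Fold reserved and surplus pages into each node's expected count.
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))               # reserved pages, 0 here
        surp=$(get_meminfo HugePages_Surp "$node")   # per-node surplus pages
        (( nodes_test[node] += surp ))               # also 0 in this run
    done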
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.169 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220572 kB' 'MemFree: 33353900 kB' 'MemUsed: 10866672 kB' 'SwapCached: 0 kB' 'Active: 5157724 kB' 'Inactive: 3727560 kB' 'Active(anon): 4998156 kB' 'Inactive(anon): 0 kB' 'Active(file): 159568 kB' 'Inactive(file): 3727560 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8739936 kB' 'Mapped: 94436 kB' 'AnonPages: 145404 kB' 'Shmem: 4852808 kB' 'KernelStack: 6456 kB' 'PageTables: 2688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 226084 kB' 'Slab: 445700 kB' 'SReclaimable: 226084 kB' 'SUnreclaim: 219616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[repeated setup/common.sh@31-@32 trace: each field of the node1 snapshot above, from MemTotal through HugePages_Free, is read and skipped with continue while scanning for HugePages_Surp]
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
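The long runs of IFS=': ' / read -r var val _ / continue entries above are produced by the get_meminfo helper in setup/common.sh: it picks /proc/meminfo or, when a node id is given, /sys/devices/system/node/nodeN/meminfo, and scans the file field by field until the requested key matches. A rough, hypothetical reconstruction of that helper based only on the traced commands (exact script internals may differ):

    shopt -s extglob   # for the +([0-9]) pattern used below

    get_meminfo() {    # usage: get_meminfo <field> [node]
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        # Prefer the per-node view when the sysfs file exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node N " prefix
        while IFS=': ' read -r var val _; do      # split "Field:   value kB"
            [[ $var == "$get" ]] || continue
            echo "$val"                           # print just the number
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

In the run above it returns 0 for HugePages_Surp on both nodes, which is why the expected per-node counts remain 512.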
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:04:19.170 node0=512 expecting 512
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512'
00:04:19.170 node1=512 expecting 512
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]]
00:04:19.170
00:04:19.170 real 0m3.423s
00:04:19.170 user 0m1.324s
00:04:19.170 sys 0m2.186s
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:19.170 22:11:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:19.170 ************************************
00:04:19.170 END TEST even_2G_alloc
00:04:19.170 ************************************
00:04:19.529 22:11:38 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc
00:04:19.529 22:11:38 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:19.529 22:11:38 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:19.529 22:11:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:19.529 ************************************
00:04:19.529 START TEST odd_alloc
00:04:19.529 ************************************
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1127 -- # odd_alloc
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
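odd_alloc starts by turning the requested 2098176 kB into a hugepage count; with the default 2048 kB hugepage size that is intentionally a non-even number of pages. A small worked example of the arithmetic behind the traced nr_hugepages=1025 (the round-up rule is inferred from the result, not copied from the script):

    size_kb=2098176      # requested size for the odd_alloc test, in kB
    hugepage_kb=2048     # default 2 MiB hugepage size
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
    echo "$nr_hugepages" # 2098176 / 2048 = 1024.5, rounded up to 1025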
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:19.529 22:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:23.002 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:23.002 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:23.002 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:23.002 22:11:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages
00:04:23.002 22:11:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node
00:04:23.002 22:11:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:04:23.002 22:11:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:04:23.002 22:11:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp
00:04:23.002 22:11:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv
00:04:23.002 22:11:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon
00:04:23.003 22:11:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:23.003 22:11:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:23.003 22:11:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
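The hugepages.sh@80-@83 iterations above distribute that odd total across the two NUMA nodes (512 pages on node1, 513 on node0), and HUGEMEM=2049 is simply the same request expressed in MB (2049 * 1024 kB = 2098176 kB). A hedged sketch of the distribution loop, reconstructed from the traced assignments rather than taken verbatim from the script:

    _nr_hugepages=1025
    _no_nodes=2
    declare -a nodes_test
    # Hand out floor(remaining / nodes_left), highest node first,
    # so the leftover page ends up on node0.
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # 512, then 513
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))          # pages left
        : $(( --_no_nodes ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"             # node0=513 node1=512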
00:04:23.003 22:11:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:23.003 22:11:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:23.003 22:11:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:23.003 22:11:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.003 22:11:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.003 22:11:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.003 22:11:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.003 22:11:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.003 22:11:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.003 22:11:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:23.003 22:11:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75505640 kB' 'MemAvailable: 79650072 kB' 'Buffers: 9772 kB' 'Cached: 12146196 kB' 'SwapCached: 0 kB' 'Active: 8754304 kB' 'Inactive: 4032888 kB' 'Active(anon): 8268632 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634428 kB' 'Mapped: 155628 kB' 'Shmem: 7637408 kB' 'KReclaimable: 461808 kB' 'Slab: 965360 kB' 'SReclaimable: 461808 kB' 'SUnreclaim: 503552 kB' 'KernelStack: 16000 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481756 kB' 'Committed_AS: 9573612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199040 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB'
[repeated setup/common.sh@31-@32 trace: each field of the /proc/meminfo snapshot above, from MemTotal through HardwareCorrupted, is read and skipped with continue while scanning for AnonHugePages]
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75507588 kB' 'MemAvailable: 79652020 kB' 'Buffers: 9772 kB' 'Cached: 12146204 kB' 'SwapCached: 0 kB' 'Active: 8753732 kB' 'Inactive: 4032888 kB' 'Active(anon): 8268060 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633904 kB' 'Mapped: 155620 kB' 'Shmem: 7637416 kB' 'KReclaimable: 461808 kB' 'Slab: 965268 kB' 'SReclaimable: 461808 kB' 'SUnreclaim: 503460 kB' 'KernelStack: 15920 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481756 kB' 'Committed_AS: 9573628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198928 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB'
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
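Before comparing per-node counts, verify_nr_hugepages samples AnonHugePages (transparent hugepages) and HugePages_Surp from /proc/meminfo; both read 0 in the snapshots above, while HugePages_Total and HugePages_Free report the full 1025 pages. A minimal illustrative sketch of those reads, reusing the get_meminfo helper sketched earlier (not the script's exact pass/fail logic):

    anon=$(get_meminfo AnonHugePages)     # 0: no THP counted against the test
    surp=$(get_meminfo HugePages_Surp)    # 0: no surplus pages system-wide
    total=$(get_meminfo HugePages_Total)  # 1025 after the odd allocation
    free=$(get_meminfo HugePages_Free)    # 1025, none consumed yet
    echo "anon=$anon surp=$surp total=$total free=$free"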
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.004 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[repeated setup/common.sh@31-@32 trace: the fields of the snapshot above from MemFree through FilePmdMapped are read and skipped with continue while scanning for HugePages_Surp]
00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75507588 kB' 'MemAvailable: 79652020 kB' 'Buffers: 9772 kB' 'Cached: 12146220 kB' 'SwapCached: 0 kB' 'Active: 8753756 kB' 'Inactive: 4032888 kB' 'Active(anon): 8268084 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633904 kB' 'Mapped: 155620 kB' 'Shmem: 7637432 kB' 'KReclaimable: 461808 kB' 'Slab: 965268 kB' 'SReclaimable: 461808 kB' 'SUnreclaim: 503460 kB' 'KernelStack: 15920 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481756 kB' 'Committed_AS: 9573648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198928 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:04:23.005 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@32: each /proc/meminfo field (SwapCached through HugePages_Total) is compared against HugePages_Rsvd and skipped with continue ...]
00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.007
22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:04:23.007 nr_hugepages=1025 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:23.007 resv_hugepages=0 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:23.007 surplus_hugepages=0 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:23.007 anon_hugepages=0 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.007 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75507268 kB' 'MemAvailable: 79651700 kB' 'Buffers: 9772 kB' 'Cached: 12146240 kB' 'SwapCached: 0 kB' 'Active: 8753656 kB' 'Inactive: 4032888 kB' 'Active(anon): 8267984 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633784 kB' 'Mapped: 155620 kB' 'Shmem: 7637452 kB' 'KReclaimable: 461808 kB' 'Slab: 965268 kB' 'SReclaimable: 461808 kB' 'SUnreclaim: 503460 kB' 'KernelStack: 15904 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481756 kB' 'Committed_AS: 9574632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198912 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB'
[... setup/common.sh@32: each /proc/meminfo field (MemTotal through Unaccepted) is compared against HugePages_Total and skipped with continue ...]
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:04:23.008 22:11:42
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 42145112 kB' 'MemUsed: 5919824 kB' 'SwapCached: 0 kB' 'Active: 3593872 kB' 'Inactive: 305328 kB' 'Active(anon): 3267768 kB' 'Inactive(anon): 0 kB' 'Active(file): 326104 kB' 'Inactive(file): 305328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3416004 kB' 'Mapped: 61172 kB' 'AnonPages: 486312 kB' 'Shmem: 2784572 kB' 'KernelStack: 9416 kB' 'PageTables: 5116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 235788 kB' 'Slab: 519836 kB' 'SReclaimable: 235788 kB' 'SUnreclaim: 284048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.008 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # 
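Most of this section is the xtrace of the get_meminfo helper in setup/common.sh: it picks /proc/meminfo or the per-node /sys/devices/system/node/nodeN/meminfo, strips the "Node N " prefix that the per-node file adds to every line, then scans "key: value" pairs until the requested field matches and echoes its value. A self-contained re-implementation sketch of that behaviour, inferred from the common.sh@17-33 lines above (illustrative, not the repository's exact function):

    # Sketch of the lookup the trace performs: get_meminfo_sketch <field> [node]
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        [[ -n $node ]] && mem=("${mem[@]#Node $node }")   # drop the "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # Matching the trace: get_meminfo_sketch HugePages_Surp 0   -> 0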
mem_f=/sys/devices/system/node/node1/meminfo 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220572 kB' 'MemFree: 33356872 kB' 'MemUsed: 10863700 kB' 'SwapCached: 0 kB' 'Active: 5165896 kB' 'Inactive: 3727560 kB' 'Active(anon): 5006328 kB' 'Inactive(anon): 0 kB' 'Active(file): 159568 kB' 'Inactive(file): 3727560 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8740044 kB' 'Mapped: 95300 kB' 'AnonPages: 153604 kB' 'Shmem: 4852916 kB' 'KernelStack: 6504 kB' 'PageTables: 2820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 226020 kB' 'Slab: 445432 kB' 'SReclaimable: 226020 kB' 'SUnreclaim: 219412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.009 22:11:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.009 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:04:23.010 node0=513 expecting 513 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:23.010 node1=512 expecting 512 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:23.010 00:04:23.010 real 0m3.450s 00:04:23.010 user 0m1.368s 00:04:23.010 sys 0m2.166s 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:23.010 22:11:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:23.010 ************************************ 00:04:23.010 END TEST odd_alloc 00:04:23.010 ************************************ 00:04:23.010 22:11:42 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:04:23.010 22:11:42 setup.sh.hugepages -- common/autotest_common.sh@1103 -- 
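The odd_alloc test closes by loading each observed per-node count into sorted_t and each expected count into sorted_s as indexed-array subscripts; because bash lists indexed-array keys in ascending order, "${!sorted_t[*]}" and "${!sorted_s[*]}" both expand to a sorted, de-duplicated string, and the final check at hugepages.sh@129 ("512 513" == "512 513") passes no matter which node received the extra page. A small sketch of that order-insensitive comparison, with the values from this run (the observed/expected names are illustrative):

    # Sketch: use values as indices so the key list comes back sorted and de-duplicated.
    declare -a sorted_t sorted_s
    observed=(513 512)      # nodes_test[*] from the trace above
    expected=(512 513)      # the requested layout, in any order
    for v in "${observed[@]}"; do sorted_t[v]=1; done
    for v in "${expected[@]}"; do sorted_s[v]=1; done
    [[ "${!sorted_s[*]}" == "${!sorted_t[*]}" ]] && echo "per-node layout OK"   # both "512 513"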
# '[' 2 -le 1 ']' 00:04:23.010 22:11:42 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.010 22:11:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.010 ************************************ 00:04:23.010 START TEST custom_alloc 00:04:23.010 ************************************ 00:04:23.010 22:11:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1127 -- # custom_alloc 00:04:23.010 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:04:23.010 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:04:23.010 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:04:23.010 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:04:23.010 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:23.010 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:04:23.010 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:04:23.010 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:23.010 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:23.010 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:04:23.010 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:23.010 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:23.010 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- 
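custom_alloc asks for 1048576 kB to seed nodes_hp[0] and 2097152 kB for the second pool; with the default 2048 kB hugepage size reported in the meminfo dumps, those become 512 and 1024 pages, and the loop at hugepages.sh@171-173 joins the per-node counts with IFS=',' into the HUGENODE string handed to setup.sh. A sketch of that conversion and string assembly (dividing by Hugepagesize is an assumption consistent with the 2 MiB pages shown; the trace itself only prints the resulting counts):

    # Sketch: size in kB -> hugepage count, then the HUGENODE string seen at hugepages.sh@177.
    declare -a nodes_hp HUGENODE
    default_hugepages=2048                       # kB, matches 'Hugepagesize: 2048 kB' in the dumps
    size_to_pages() { echo $(( $1 / default_hugepages )); }
    nodes_hp[0]=$(size_to_pages 1048576)         # 512
    nodes_hp[1]=$(size_to_pages 2097152)         # 1024
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    (IFS=,; echo "HUGENODE=${HUGENODE[*]}")      # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024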
setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.011 22:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:26.308 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:26.308 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:26.308 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:26.308 
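After setup.sh reserves the pages (the "Already using the vfio-pci driver" lines above are its device scan), verify_nr_hugepages first decides whether anonymous transparent hugepages need to be accounted for: the string "always [madvise] never" tested at hugepages.sh@95 is the familiar format of /sys/kernel/mm/transparent_hugepage/enabled, so AnonHugePages is only read when THP is not pinned to [never]. A small, self-contained sketch of that guard (the sysfs path is an assumption; the trace only shows the already-expanded value):

    # Sketch of the THP guard at hugepages.sh@95-96; path assumed, value format matches the trace.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP active: anonymous huge pages could skew the accounting, so read them out.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)         # 0 kB in the dump below
    fi
    echo "anon=${anon:-0} kB"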
22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.308 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 74466084 kB' 'MemAvailable: 78610500 kB' 'Buffers: 9772 kB' 'Cached: 12146352 kB' 'SwapCached: 0 kB' 'Active: 8754592 kB' 'Inactive: 4032888 kB' 'Active(anon): 8268920 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634552 kB' 'Mapped: 155652 kB' 'Shmem: 7637564 kB' 'KReclaimable: 461792 kB' 'Slab: 965140 kB' 'SReclaimable: 461792 kB' 'SUnreclaim: 503348 kB' 'KernelStack: 15920 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958492 kB' 'Committed_AS: 9574020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198944 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 
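The /proc/meminfo dump above is the first system-wide check after setup: HugePages_Total 1536 matches the two custom pools (nodes_hp[0] + nodes_hp[1] = 512 + 1024 = 1536), and Hugetlb 3145728 kB is exactly 1536 x 2048 kB, so every reserved page is the default 2 MiB size and nothing spilled into a larger pool.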
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.309 22:11:45 
00:04:26.309 22:11:45 setup.sh.hugepages.custom_alloc -- [trace condensed] setup/common.sh@31-@32: the get_meminfo AnonHugePages scan skips the remaining /proc/meminfo fields (Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) until AnonHugePages matches; setup/common.sh@33 echoes 0 and returns 0, and setup/hugepages.sh@96 records anon=0.
00:04:26.310 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp: setup/common.sh@17-@29 re-read /proc/meminfo into the mem array (node= is empty, so the per-node /sys/devices/system/node/node*/meminfo path is not used). Snapshot: MemTotal 92285508 kB, MemFree 74467668 kB, MemAvailable 78612084 kB, Buffers 9772 kB, Cached 12146356 kB, Active 8754268 kB, Inactive 4032888 kB, AnonPages 634272 kB, Mapped 155640 kB, Shmem 7637568 kB, KernelStack 15920 kB, PageTables 7896 kB, Committed_AS 9574036 kB, HugePages_Total 1536, HugePages_Free 1536, HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB, Hugetlb 3145728 kB, DirectMap4k 402944 kB, DirectMap2M 8710144 kB, DirectMap1G 93323264 kB.
00:04:26.310 22:11:45 setup.sh.hugepages.custom_alloc -- [trace condensed] setup/common.sh@31-@32: the scan skips every field until HugePages_Surp matches; setup/common.sh@33 echoes 0.
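The loop traced above is the get_meminfo helper in setup/common.sh. A minimal sketch of that parsing logic, reconstructed from this trace rather than copied from the upstream script, looks like this (the per-node fallback and the "Node <n>" prefix strip correspond to the checks at common.sh@23 and @29; extglob is assumed for the prefix pattern):

    shopt -s extglob
    get_meminfo() {                        # sketch, reconstructed from the xtrace above
        local get=$1                       # field to look up, e.g. HugePages_Surp
        local node=${2:-}                  # optional NUMA node number
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # If a node was requested and a per-node meminfo exists, read that instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"                # e.g. "0" or "1536"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }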
00:04:26.312 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0; setup/hugepages.sh@98 records surp=0.
00:04:26.312 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd: another /proc/meminfo snapshot is captured (MemFree 74466912 kB, AnonPages 634136 kB, KernelStack 15904 kB, PageTables 7852 kB, Committed_AS 9574200 kB; hugepage counters unchanged: HugePages_Total 1536, HugePages_Free 1536, HugePages_Rsvd 0, HugePages_Surp 0) and the field-by-field scan at setup/common.sh@31-@32 skips every entry until HugePages_Rsvd matches; setup/common.sh@33 echoes 0 and returns 0. setup/hugepages.sh@99 records resv=0.
00:04:26.314 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101-@104 -- # echo nr_hugepages / resv_hugepages / surplus_hugepages / anon_hugepages
nr_hugepages=1536
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:04:26.314 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:26.314 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
00:04:26.314 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
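The bookkeeping traced at setup/hugepages.sh@96-@108 boils down to comparing the requested pool size with what the kernel reports. A sketch using the values echoed in this run (variable names mirror the trace; the failure handling is illustrative, not the real script's, and how the 1536-page target is derived is not shown in this excerpt):

    anon=0; surp=0; resv=0; nr_hugepages=1536     # values echoed in this run
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # The requested pool size (1536 x 2048 kB pages here) must equal the allocated
    # pages plus surplus and reserved pages, and the bare allocated count itself:
    (( 1536 == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
    (( 1536 == nr_hugepages ))               || echo "hugepage count mismatch"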
00:04:26.314 22:11:45 setup.sh.hugepages.custom_alloc -- [trace condensed] setup/common.sh@20-@29: get_meminfo HugePages_Total re-reads /proc/meminfo (MemFree now 74467392 kB, AnonPages 634540 kB, KernelStack 15920 kB, PageTables 7904 kB; HugePages_Total 1536, HugePages_Free 1536, HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB, Hugetlb 3145728 kB, DirectMap4k 402944 kB, DirectMap2M 8710144 kB, DirectMap1G 93323264 kB) and the setup/common.sh@31-@32 scan again walks the fields one by one; the trace of this scan continues beyond this excerpt.
# read -r var val _ 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:26.315 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 42162548 kB' 'MemUsed: 5902388 kB' 'SwapCached: 0 kB' 'Active: 3594308 kB' 'Inactive: 305328 kB' 'Active(anon): 3268204 kB' 'Inactive(anon): 0 kB' 'Active(file): 326104 kB' 'Inactive(file): 
305328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3416020 kB' 'Mapped: 61168 kB' 'AnonPages: 486732 kB' 'Shmem: 2784588 kB' 'KernelStack: 9432 kB' 'PageTables: 5212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 235772 kB' 'Slab: 519716 kB' 'SReclaimable: 235772 kB' 'SUnreclaim: 283944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.316 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220572 kB' 'MemFree: 32306916 kB' 'MemUsed: 11913656 kB' 'SwapCached: 0 kB' 'Active: 5160524 kB' 'Inactive: 3727560 kB' 'Active(anon): 5000956 kB' 'Inactive(anon): 0 kB' 'Active(file): 159568 kB' 'Inactive(file): 
3727560 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8740188 kB' 'Mapped: 94472 kB' 'AnonPages: 147936 kB' 'Shmem: 4853060 kB' 'KernelStack: 6472 kB' 'PageTables: 2660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 226020 kB' 'Slab: 445468 kB' 'SReclaimable: 226020 kB' 'SUnreclaim: 219448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.317 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:26.318 node0=512 expecting 512 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:04:26.318 node1=1024 expecting 1024 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:26.318 00:04:26.318 real 0m3.475s 00:04:26.318 user 0m1.351s 00:04:26.318 sys 0m2.187s 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.318 22:11:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:26.318 ************************************ 00:04:26.318 END TEST custom_alloc 00:04:26.318 ************************************ 00:04:26.318 22:11:45 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:26.318 22:11:45 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.318 22:11:45 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.318 22:11:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:26.578 ************************************ 
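[editor's note] The xtrace above is setup/common.sh's get_meminfo helper walking a node's meminfo file key by key until it reaches the requested field (HugePages_Total for the pool check, then HugePages_Surp per node), echoing the value and returning 0. The following is a minimal bash sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source; the exact redirections and file handling are assumptions, only the variable names and steps visible in the trace (mem_f selection, the "Node N " strip, the IFS=': ' read loop) are taken as given.

shopt -s extglob   # needed for the +([0-9]) pattern used to strip the "Node N " prefix

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo mem

    # Per-node stats live under /sys when a node id is given (common.sh@23-24 in the trace).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node meminfo files prefix every line with "Node N "; strip it (common.sh@29 in the trace).
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan key by key; skip everything that is not the requested field (common.sh@31-33).
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# e.g. get_meminfo HugePages_Surp 0  -> 0 on this box; get_meminfo HugePages_Total 1 -> 1024

Against those values, hugepages.sh accumulates nodes_test[node] += resv and the per-node surplus, then compares the result with the expected split, which is why the custom_alloc test prints "node0=512 expecting 512" and "node1=1024 expecting 1024" before the final [[ 512,1024 == 512,1024 ]] check passes above.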
00:04:26.578 START TEST no_shrink_alloc 00:04:26.578 ************************************ 00:04:26.578 22:11:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1127 -- # no_shrink_alloc 00:04:26.578 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:04:26.578 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:26.578 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:26.578 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:04:26.578 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:26.578 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:04:26.578 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:26.578 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:26.578 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:26.578 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:26.578 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:26.579 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:26.579 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:26.579 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:26.579 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:26.579 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:26.579 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:26.579 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:26.579 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:04:26.579 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:04:26.579 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:04:26.579 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:04:26.579 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.579 22:11:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:29.880 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:29.880 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:29.880 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:29.880 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:29.880 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:29.880 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:29.880 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:29.880 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:29.880 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.880 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:29.880 0000:80:04.6 (8086 
2021): Already using the vfio-pci driver 00:04:29.880 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:29.880 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:29.880 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:29.880 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:29.880 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:29.880 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75526488 kB' 'MemAvailable: 79670904 kB' 'Buffers: 9772 kB' 'Cached: 12146504 kB' 'SwapCached: 0 kB' 'Active: 8755216 kB' 'Inactive: 4032888 kB' 'Active(anon): 8269544 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635088 kB' 'Mapped: 155736 kB' 'Shmem: 7637716 kB' 'KReclaimable: 461792 kB' 'Slab: 965392 kB' 'SReclaimable: 461792 kB' 'SUnreclaim: 503600 kB' 'KernelStack: 15952 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9574564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198976 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 
22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 
22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.880 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 
22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 
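The xtrace entries above come from setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo with mapfile, re-reads the snapshot with IFS=': ', skips (continue) every key that does not match the requested one (here AnonHugePages), then echoes the matching value and returns. The following is a minimal, self-contained sketch of that pattern; the function body is an approximation reconstructed from the trace, not the literal SPDK source, and the per-node branch is assumed from the node$node/meminfo test visible in the log.

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern shown in the trace (approximation,
# not the literal SPDK setup/common.sh source).
shopt -s extglob   # needed for the "Node <n> " prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    local mem

    # A per-NUMA-node query reads the node-local meminfo instead
    # (the [[ -e /sys/devices/system/node/node$node/meminfo ]] test in the log).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # node files prefix every line with "Node <n> "

    # Scan key by key; every non-matching key shows up as a "continue" in the xtrace.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Mirrors the assignments visible further down in the trace (setup/hugepages.sh@96-@99).
anon=$(get_meminfo AnonHugePages)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
echo "anon=$anon surp=$surp resv=$resv"

Echoing the value and returning 0 on the first match is what lets hugepages.sh capture each field via command substitution, which is the anon=0 assignment that appears at setup/hugepages.sh@96 immediately below, followed by the same scan repeated for HugePages_Surp and HugePages_Rsvd.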
22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75526916 kB' 'MemAvailable: 79671332 kB' 'Buffers: 9772 kB' 'Cached: 12146508 kB' 'SwapCached: 0 kB' 'Active: 8755052 kB' 'Inactive: 4032888 kB' 'Active(anon): 8269380 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634948 kB' 'Mapped: 155656 kB' 'Shmem: 7637720 kB' 'KReclaimable: 461792 kB' 'Slab: 965340 kB' 'SReclaimable: 461792 kB' 'SUnreclaim: 503548 kB' 'KernelStack: 15936 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9574580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198976 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.881 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.882 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75526996 kB' 'MemAvailable: 79671412 kB' 'Buffers: 9772 kB' 'Cached: 12146528 kB' 'SwapCached: 0 kB' 'Active: 8754788 kB' 'Inactive: 4032888 kB' 'Active(anon): 8269116 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634620 kB' 'Mapped: 155656 kB' 'Shmem: 7637740 kB' 'KReclaimable: 461792 kB' 'Slab: 965340 kB' 'SReclaimable: 461792 kB' 'SUnreclaim: 503548 kB' 'KernelStack: 15936 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9574604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198992 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.883 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.884 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:29.885 nr_hugepages=1024 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:29.885 resv_hugepages=0 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:29.885 surplus_hugepages=0 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:29.885 anon_hugepages=0 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.885 22:11:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75526996 kB' 'MemAvailable: 79671412 kB' 'Buffers: 9772 kB' 'Cached: 12146528 kB' 'SwapCached: 0 kB' 'Active: 8754824 kB' 'Inactive: 4032888 kB' 'Active(anon): 8269152 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634652 kB' 'Mapped: 155656 kB' 'Shmem: 7637740 kB' 'KReclaimable: 461792 kB' 'Slab: 965340 kB' 'SReclaimable: 461792 kB' 'SUnreclaim: 503548 kB' 'KernelStack: 15952 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9574624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198992 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
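[editor's note] The trace around this point is setup/common.sh's get_meminfo helper: it snapshots the meminfo fields with printf/mapfile and then walks them one by one with IFS=': ' and read, skipping every key (continue) until it reaches the requested one (HugePages_Rsvd above, HugePages_Total here) and echoes its value. A minimal standalone sketch of that scan pattern, assuming illustrative names (get_meminfo_value is not part of the SPDK scripts):

  # Sketch only, not the setup/common.sh implementation: return the value of one
  # field from a meminfo-style file.
  get_meminfo_value() {
      local key=$1 file=${2:-/proc/meminfo}
      local var val _
      while IFS=': ' read -r var val _; do
          # Skip every field until the requested key is reached.
          [[ $var == "$key" ]] || continue
          echo "$val"
          return 0
      done < "$file"
      return 1
  }

  # Example matching this run: 1024 total hugepages, 0 reserved.
  nr=$(get_meminfo_value HugePages_Total)
  resv=$(get_meminfo_value HugePages_Rsvd)
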
00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.885 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.886 22:11:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.886 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.887 22:11:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 41115056 kB' 'MemUsed: 6949880 kB' 'SwapCached: 0 kB' 'Active: 3594200 kB' 'Inactive: 305328 kB' 'Active(anon): 3268096 kB' 'Inactive(anon): 0 kB' 'Active(file): 326104 kB' 'Inactive(file): 305328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3416116 kB' 'Mapped: 61164 kB' 'AnonPages: 486548 kB' 'Shmem: 2784684 kB' 'KernelStack: 9448 kB' 'PageTables: 5160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 235772 kB' 'Slab: 519756 kB' 'SReclaimable: 235772 kB' 'SUnreclaim: 283984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.887 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
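[editor's note] This second pass is the same scan, but with node=0, so mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and the script looks up HugePages_Surp for that node. A simplified sketch of collecting per-node hugepage counts the same way (node_huge is an illustrative name; the real script strips the "Node N " prefix with an extglob instead of reading extra fields):

  # Sketch only: gather HugePages_Total per NUMA node from each node's meminfo.
  declare -A node_huge
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      while IFS=': ' read -r _prefix _id var val _; do
          # Node meminfo lines look like "Node 0 HugePages_Total:  1024".
          [[ $var == HugePages_Total ]] || continue
          node_huge[$node]=$val
      done < "$node_dir/meminfo"
  done
  # Per this trace: node0 holds all 1024 hugepages and node1 holds 0,
  # which is what the "node0=1024 expecting 1024" check below verifies.
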
00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:29.888 node0=1024 expecting 1024 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:04:29.888 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.889 22:11:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:33.221 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:33.221 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:33.221 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:33.221 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75546248 kB' 'MemAvailable: 79690656 kB' 'Buffers: 9772 kB' 'Cached: 12146644 kB' 'SwapCached: 0 kB' 'Active: 8756140 kB' 'Inactive: 4032888 kB' 'Active(anon): 8270468 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635896 kB' 'Mapped: 155760 kB' 'Shmem: 7637856 kB' 'KReclaimable: 461784 kB' 'Slab: 965388 kB' 'SReclaimable: 461784 kB' 'SUnreclaim: 503604 kB' 'KernelStack: 15936 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9575088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198976 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.221 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.222 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75546764 kB' 'MemAvailable: 79691156 kB' 'Buffers: 9772 kB' 'Cached: 12146644 kB' 'SwapCached: 0 kB' 'Active: 8756232 kB' 'Inactive: 4032888 kB' 'Active(anon): 8270560 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636004 kB' 'Mapped: 155664 kB' 'Shmem: 7637856 kB' 'KReclaimable: 461768 kB' 'Slab: 965348 kB' 'SReclaimable: 461768 kB' 'SUnreclaim: 503580 kB' 'KernelStack: 15952 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9576116 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 198928 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 
22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.223 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75546804 kB' 'MemAvailable: 79691196 kB' 'Buffers: 9772 kB' 'Cached: 12146660 kB' 'SwapCached: 0 kB' 'Active: 8756336 kB' 'Inactive: 4032888 kB' 'Active(anon): 8270664 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636128 kB' 'Mapped: 155664 kB' 'Shmem: 7637872 kB' 'KReclaimable: 461768 kB' 'Slab: 965356 kB' 'SReclaimable: 461768 kB' 'SUnreclaim: 503588 kB' 'KernelStack: 15920 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9577588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198880 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:33.224 
22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.224 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 
22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.225 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:33.226 nr_hugepages=1024 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:33.226 resv_hugepages=0 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:33.226 surplus_hugepages=0 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:33.226 anon_hugepages=0 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285508 kB' 'MemFree: 75553336 kB' 'MemAvailable: 79697728 kB' 'Buffers: 9772 kB' 'Cached: 12146684 kB' 'SwapCached: 0 kB' 'Active: 8756556 kB' 'Inactive: 4032888 kB' 'Active(anon): 8270884 kB' 'Inactive(anon): 0 kB' 'Active(file): 485672 kB' 'Inactive(file): 4032888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635796 kB' 'Mapped: 155660 kB' 'Shmem: 7637896 kB' 'KReclaimable: 461768 kB' 'Slab: 965356 kB' 'SReclaimable: 461768 kB' 'SUnreclaim: 503588 kB' 'KernelStack: 15968 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9577776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198976 kB' 'VmallocChunk: 0 kB' 'Percpu: 61056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 402944 kB' 'DirectMap2M: 8710144 kB' 'DirectMap1G: 93323264 kB' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.226 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.227 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 41126020 kB' 'MemUsed: 6938916 kB' 'SwapCached: 0 kB' 'Active: 3596048 kB' 'Inactive: 305328 kB' 'Active(anon): 3269944 kB' 'Inactive(anon): 0 kB' 'Active(file): 326104 kB' 'Inactive(file): 305328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3416204 kB' 'Mapped: 61164 kB' 'AnonPages: 488488 kB' 'Shmem: 2784772 kB' 'KernelStack: 9496 kB' 'PageTables: 5316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 235748 kB' 'Slab: 519800 kB' 'SReclaimable: 235748 kB' 'SUnreclaim: 284052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.228 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
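The entries above walk get_meminfo through every key of /proc/meminfo (and then the per-node /sys/devices/system/node/node0/meminfo) until the requested field matches and its value is echoed back. Below is a condensed sketch of that lookup, reconstructed from the commands visible in the trace; it is not the verbatim setup/common.sh source, and extglob is assumed to be enabled for the "Node N" prefix strip.

# Sketch of the meminfo lookup traced above (reconstruction, not the exact script).
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # Prefer the per-node meminfo file when a node index is given and it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # drop the leading "Node N " prefix on per-node files
    while IFS=': ' read -r var val _; do      # e.g. "HugePages_Total: 1024"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

For the dump shown above, get_meminfo_sketch HugePages_Total prints 1024, and get_meminfo_sketch HugePages_Surp 0 reads node0's file and prints 0, matching the "echo 1024" and "echo 0" returns in the trace.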
00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.229 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.230 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.230 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.230 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:33.230 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:33.230 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:33.230 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:33.230 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:33.230 node0=1024 expecting 1024 00:04:33.230 22:11:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:33.230 00:04:33.230 real 0m6.786s 00:04:33.230 user 0m2.654s 00:04:33.230 sys 0m4.306s 00:04:33.230 22:11:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:33.230 22:11:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:33.230 ************************************ 00:04:33.230 END TEST no_shrink_alloc 00:04:33.230 ************************************ 00:04:33.230 22:11:52 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:04:33.230 22:11:52 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:33.230 22:11:52 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:33.230 22:11:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.230 22:11:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:33.230 22:11:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.230 22:11:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:33.230 22:11:52 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:33.230 22:11:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.230 22:11:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:33.230 22:11:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.230 22:11:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:33.230 22:11:52 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:33.230 22:11:52 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:33.230 00:04:33.230 real 0m24.586s 00:04:33.230 user 0m8.393s 00:04:33.230 sys 0m13.608s 
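The tail of the hugepages run above (hugepages.sh@36-44, clear_hp) loops over every NUMA node's hugepages sysfs directories, echoes 0 for each size, and then exports CLEAR_HUGE=yes. A minimal sketch of that cleanup follows; writing to nr_hugepages is an assumption, since xtrace shows only the bare "echo 0" and not its redirection.

# Sketch of the clear_hp cleanup traced above. The nr_hugepages target is assumed;
# the trace does not show where the "echo 0" output is redirected.
clear_hp_sketch() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # requires root; releases the persistent hugepage pool
        done
    done
    export CLEAR_HUGE=yes
}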
00:04:33.230 22:11:52 setup.sh.hugepages -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:33.230 22:11:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:33.230 ************************************ 00:04:33.230 END TEST hugepages 00:04:33.230 ************************************ 00:04:33.489 22:11:52 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:33.490 22:11:52 setup.sh -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:33.490 22:11:52 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:33.490 22:11:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:33.490 ************************************ 00:04:33.490 START TEST driver 00:04:33.490 ************************************ 00:04:33.490 22:11:52 setup.sh.driver -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:33.490 * Looking for test storage... 00:04:33.490 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:33.490 22:11:52 setup.sh.driver -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:33.490 22:11:52 setup.sh.driver -- common/autotest_common.sh@1691 -- # lcov --version 00:04:33.490 22:11:52 setup.sh.driver -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:33.490 22:11:52 setup.sh.driver -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.490 22:11:52 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:04:33.490 22:11:52 setup.sh.driver -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.490 22:11:52 setup.sh.driver -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:33.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.490 --rc genhtml_branch_coverage=1 00:04:33.490 --rc genhtml_function_coverage=1 00:04:33.490 --rc genhtml_legend=1 00:04:33.490 --rc geninfo_all_blocks=1 00:04:33.490 --rc geninfo_unexecuted_blocks=1 00:04:33.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:33.490 ' 00:04:33.490 22:11:52 setup.sh.driver -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:33.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.490 --rc genhtml_branch_coverage=1 00:04:33.490 --rc genhtml_function_coverage=1 00:04:33.490 --rc genhtml_legend=1 00:04:33.490 --rc geninfo_all_blocks=1 00:04:33.490 --rc geninfo_unexecuted_blocks=1 00:04:33.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:33.490 ' 00:04:33.490 22:11:52 setup.sh.driver -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:33.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.490 --rc genhtml_branch_coverage=1 00:04:33.490 --rc genhtml_function_coverage=1 00:04:33.490 --rc genhtml_legend=1 00:04:33.490 --rc geninfo_all_blocks=1 00:04:33.490 --rc geninfo_unexecuted_blocks=1 00:04:33.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:33.490 ' 00:04:33.490 22:11:52 setup.sh.driver -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:33.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.490 --rc genhtml_branch_coverage=1 00:04:33.490 --rc genhtml_function_coverage=1 00:04:33.490 --rc genhtml_legend=1 00:04:33.490 --rc geninfo_all_blocks=1 00:04:33.490 --rc geninfo_unexecuted_blocks=1 00:04:33.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:33.490 ' 00:04:33.490 22:11:52 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:33.490 22:11:52 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.490 22:11:52 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.799 22:11:57 setup.sh.driver -- 
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:38.799 22:11:57 setup.sh.driver -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:38.799 22:11:57 setup.sh.driver -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:38.799 22:11:57 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.799 ************************************ 00:04:38.799 START TEST guess_driver 00:04:38.799 ************************************ 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@1127 -- # guess_driver 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 160 > 0 )) 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:38.799 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:38.799 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:38.799 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:38.799 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:38.799 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:38.799 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:38.799 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:38.799 Looking for driver=vfio-pci 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
# setup output config 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.799 22:11:57 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.354 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.613 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.614 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.614 22:12:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.909 22:12:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.909 22:12:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.909 22:12:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.909 22:12:04 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:44.909 22:12:04 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:44.909 22:12:04 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:44.909 22:12:04 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.188 00:04:50.188 real 0m11.191s 00:04:50.188 user 0m2.504s 00:04:50.188 sys 0m4.883s 00:04:50.188 22:12:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:50.188 22:12:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.188 ************************************ 00:04:50.188 END TEST guess_driver 00:04:50.188 ************************************ 00:04:50.188 00:04:50.188 real 0m16.068s 00:04:50.188 user 0m3.933s 00:04:50.188 sys 0m7.585s 00:04:50.188 22:12:08 setup.sh.driver -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:50.188 22:12:08 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.188 ************************************ 00:04:50.188 END TEST driver 00:04:50.188 ************************************ 00:04:50.188 22:12:08 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:50.188 22:12:08 setup.sh -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:50.188 22:12:08 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.188 22:12:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:50.188 ************************************ 00:04:50.188 START TEST devices 00:04:50.188 ************************************ 00:04:50.188 22:12:08 setup.sh.devices -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:50.188 * Looking for test storage... 00:04:50.188 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:50.188 22:12:09 setup.sh.devices -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.188 22:12:09 setup.sh.devices -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.188 22:12:09 setup.sh.devices -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.188 22:12:09 setup.sh.devices -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.188 22:12:09 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:04:50.188 22:12:09 setup.sh.devices -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.188 22:12:09 setup.sh.devices -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.188 --rc genhtml_branch_coverage=1 00:04:50.188 --rc genhtml_function_coverage=1 00:04:50.188 --rc genhtml_legend=1 00:04:50.188 --rc geninfo_all_blocks=1 00:04:50.188 --rc geninfo_unexecuted_blocks=1 00:04:50.188 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.188 ' 00:04:50.188 22:12:09 setup.sh.devices -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.188 --rc genhtml_branch_coverage=1 00:04:50.188 --rc genhtml_function_coverage=1 00:04:50.188 --rc genhtml_legend=1 00:04:50.188 --rc geninfo_all_blocks=1 00:04:50.188 --rc geninfo_unexecuted_blocks=1 00:04:50.188 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.188 ' 00:04:50.188 22:12:09 setup.sh.devices -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:50.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.188 --rc genhtml_branch_coverage=1 00:04:50.188 --rc genhtml_function_coverage=1 00:04:50.188 --rc genhtml_legend=1 00:04:50.188 --rc geninfo_all_blocks=1 00:04:50.188 --rc geninfo_unexecuted_blocks=1 00:04:50.188 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.188 ' 00:04:50.188 22:12:09 setup.sh.devices -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.188 --rc genhtml_branch_coverage=1 00:04:50.188 --rc genhtml_function_coverage=1 00:04:50.188 --rc genhtml_legend=1 00:04:50.188 --rc geninfo_all_blocks=1 00:04:50.188 --rc geninfo_unexecuted_blocks=1 00:04:50.188 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.188 ' 00:04:50.188 22:12:09 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:50.188 22:12:09 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:50.188 22:12:09 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.188 22:12:09 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.476 22:12:12 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:53.476 22:12:12 setup.sh.devices -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:53.476 22:12:12 setup.sh.devices -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:53.476 22:12:12 setup.sh.devices -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:53.476 22:12:12 setup.sh.devices -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:53.476 22:12:12 setup.sh.devices -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:53.476 22:12:12 setup.sh.devices -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:53.476 22:12:12 setup.sh.devices -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:53.476 22:12:12 setup.sh.devices -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:53.476 22:12:12 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:53.476 22:12:12 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:53.476 22:12:12 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:53.476 22:12:12 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:53.477 22:12:12 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:53.477 22:12:12 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:53.477 22:12:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:53.477 22:12:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:53.477 22:12:12 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:53.477 22:12:12 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:53.477 22:12:12 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:53.477 22:12:12 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:04:53.477 22:12:12 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:53.477 No valid GPT data, bailing 00:04:53.477 22:12:12 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:53.477 22:12:12 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:04:53.477 22:12:12 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:04:53.477 22:12:12 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:53.477 22:12:12 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:53.477 22:12:12 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:53.477 22:12:12 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:04:53.477 22:12:12 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:04:53.477 22:12:12 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:53.477 22:12:12 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:53.477 22:12:12 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:53.477 22:12:12 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:53.477 22:12:12 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:53.477 22:12:12 setup.sh.devices -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:53.477 22:12:12 
setup.sh.devices -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:53.477 22:12:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:53.477 ************************************ 00:04:53.477 START TEST nvme_mount 00:04:53.477 ************************************ 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1127 -- # nvme_mount 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:53.477 22:12:12 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:54.415 Creating new GPT entries in memory. 00:04:54.415 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:54.415 other utilities. 00:04:54.415 22:12:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:54.415 22:12:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.415 22:12:13 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:54.415 22:12:13 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:54.415 22:12:13 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:55.793 Creating new GPT entries in memory. 00:04:55.793 The operation has completed successfully. 
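At this point the nvme_mount run has zapped the disk and created its single test partition; the trace that follows formats and mounts it. Reduced to plain commands, the sequence is roughly the sketch below (reconstructed from the xtrace lines, not the verbatim devices.sh/common.sh code; the disk/mnt variables are introduced only for readability, and the final ':' redirect is an assumption about how the dummy test_nvme file gets created).

  disk=/dev/nvme0n1
  mnt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
  sgdisk "$disk" --zap-all                            # wipe existing GPT/MBR signatures
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # one 1 GiB partition (2097152 sectors)
  mkdir -p "$mnt"
  mkfs.ext4 -qF "${disk}p1"                           # quiet, forced ext4 format of the new partition
  mount "${disk}p1" "$mnt"
  : > "$mnt/test_nvme"                                # dummy file later checked by the verify step (assumed)
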
00:04:55.794 22:12:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:55.794 22:12:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:55.794 22:12:14 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3075102 00:04:55.794 22:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.794 22:12:14 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:55.794 22:12:14 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.794 22:12:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:55.794 22:12:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:55.794 22:12:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.794 22:12:15 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.794 22:12:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:55.794 22:12:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:55.794 22:12:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.794 22:12:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.794 22:12:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:55.794 22:12:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.794 22:12:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:55.794 22:12:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:55.794 22:12:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.794 22:12:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:55.794 22:12:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:55.794 22:12:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.794 22:12:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 
22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:59.082 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:59.082 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:59.082 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:59.082 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:59.082 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:59.082 22:12:18 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.341 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.341 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:59.341 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:59.341 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.341 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.341 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:59.341 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.341 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:59.341 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:59.341 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.341 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:59.341 22:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:59.341 22:12:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.341 22:12:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local 
pci status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.635 22:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:05.928 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.928 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:05.928 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ 
_ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.929 22:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.929 22:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:05.929 22:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:05.929 22:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:05.929 22:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:05.929 22:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.929 22:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.929 22:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:05.929 22:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:05.929 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:05.929 00:05:05.929 real 0m12.261s 00:05:05.929 user 0m3.700s 00:05:05.929 sys 0m6.473s 00:05:05.929 22:12:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:05.929 22:12:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:05.929 ************************************ 00:05:05.929 END TEST nvme_mount 00:05:05.929 ************************************ 00:05:05.929 22:12:25 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:05.929 22:12:25 setup.sh.devices -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:05.929 22:12:25 setup.sh.devices -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:05.929 22:12:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:05.929 ************************************ 00:05:05.929 START TEST dm_mount 00:05:05.929 ************************************ 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- common/autotest_common.sh@1127 -- # dm_mount 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:05.929 
22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:05.929 22:12:25 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:06.867 Creating new GPT entries in memory. 00:05:06.867 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:06.867 other utilities. 00:05:06.867 22:12:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:06.867 22:12:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.867 22:12:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:06.867 22:12:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:06.867 22:12:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:07.806 Creating new GPT entries in memory. 00:05:07.806 The operation has completed successfully. 00:05:07.806 22:12:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:07.806 22:12:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:07.806 22:12:27 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:07.806 22:12:27 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:07.806 22:12:27 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:08.744 The operation has completed successfully. 
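The dm_mount steps that follow in the trace (dmsetup create, the /sys holders checks, mkfs, mount) amount to roughly the sketch below. The actual device-mapper table is not shown in the xtrace output, so the two-segment linear mapping here is an assumption; its sector counts simply match the two partitions created above (2097152 sectors each).

  # build one dm device spanning both freshly created partitions (hypothetical linear table)
  printf '%s\n' \
    '0 2097152 linear /dev/nvme0n1p1 0' \
    '2097152 2097152 linear /dev/nvme0n1p2 0' | dmsetup create nvme_dm_test
  readlink -f /dev/mapper/nvme_dm_test                # resolves to /dev/dm-0 in this run
  ls /sys/class/block/nvme0n1p1/holders/              # both partitions now list dm-0 as a holder
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
  mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
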
00:05:08.744 22:12:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:08.744 22:12:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.744 22:12:28 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3078834 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.003 22:12:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:12.295 22:12:31 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.295 22:12:31 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.584 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:15.585 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:15.585 00:05:15.585 real 0m9.723s 00:05:15.585 user 0m2.386s 00:05:15.585 sys 0m4.410s 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.585 22:12:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:15.585 ************************************ 00:05:15.585 END TEST dm_mount 00:05:15.585 ************************************ 00:05:15.585 22:12:34 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:15.585 22:12:34 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:15.585 22:12:34 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.585 22:12:34 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.585 22:12:34 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:15.585 22:12:34 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.585 22:12:34 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:15.844 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:15.844 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:15.844 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:15.844 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:15.844 22:12:35 
setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:15.844 22:12:35 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:15.844 22:12:35 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:15.844 22:12:35 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.844 22:12:35 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:15.844 22:12:35 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.844 22:12:35 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:15.844 00:05:15.844 real 0m26.366s 00:05:15.844 user 0m7.625s 00:05:15.844 sys 0m13.658s 00:05:15.844 22:12:35 setup.sh.devices -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.844 22:12:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:15.844 ************************************ 00:05:15.844 END TEST devices 00:05:15.844 ************************************ 00:05:15.844 00:05:15.844 real 1m33.159s 00:05:15.844 user 0m27.928s 00:05:15.844 sys 0m49.496s 00:05:15.844 22:12:35 setup.sh -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.844 22:12:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:15.844 ************************************ 00:05:15.844 END TEST setup.sh 00:05:15.844 ************************************ 00:05:15.844 22:12:35 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:19.137 Hugepages 00:05:19.137 node hugesize free / total 00:05:19.137 node0 1048576kB 0 / 0 00:05:19.137 node0 2048kB 1024 / 1024 00:05:19.137 node1 1048576kB 0 / 0 00:05:19.137 node1 2048kB 1024 / 1024 00:05:19.137 00:05:19.137 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:19.137 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:19.137 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:19.137 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:19.137 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:19.137 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:19.137 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:19.137 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:19.137 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:19.137 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:19.137 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:19.137 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:19.137 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:19.137 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:19.137 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:19.137 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:19.137 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:19.137 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:19.397 22:12:38 -- spdk/autotest.sh@117 -- # uname -s 00:05:19.397 22:12:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:19.397 22:12:38 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:19.397 22:12:38 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:22.692 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:00:04.1 
(8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:22.692 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:25.983 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:25.983 22:12:45 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:26.918 22:12:46 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:26.918 22:12:46 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:26.918 22:12:46 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:26.918 22:12:46 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:26.918 22:12:46 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:26.918 22:12:46 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:26.918 22:12:46 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.918 22:12:46 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:26.918 22:12:46 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:26.918 22:12:46 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:26.918 22:12:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:05:26.918 22:12:46 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:30.207 Waiting for block devices as requested 00:05:30.207 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:30.207 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:30.207 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:30.207 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:30.467 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:30.467 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:30.467 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:30.726 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:30.726 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:30.726 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:30.985 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:30.985 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:30.985 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:31.245 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:31.245 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:31.245 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:31.504 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:31.504 22:12:50 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:31.504 22:12:50 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:31.504 22:12:50 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:31.505 22:12:50 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:05:31.505 22:12:50 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:31.505 22:12:50 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:31.505 22:12:50 -- common/autotest_common.sh@1490 -- # basename 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:31.505 22:12:50 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:31.505 22:12:50 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:31.505 22:12:50 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:31.505 22:12:50 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:31.505 22:12:50 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:31.505 22:12:50 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:31.505 22:12:50 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:05:31.505 22:12:50 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:31.505 22:12:50 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:31.505 22:12:50 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:31.505 22:12:50 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:31.505 22:12:50 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:31.505 22:12:50 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:31.505 22:12:50 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:31.505 22:12:50 -- common/autotest_common.sh@1541 -- # continue 00:05:31.505 22:12:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:31.505 22:12:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:31.505 22:12:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.505 22:12:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:31.505 22:12:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:31.765 22:12:51 -- common/autotest_common.sh@10 -- # set +x 00:05:31.765 22:12:51 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:35.062 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:35.062 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:38.373 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:38.373 22:12:57 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:38.373 22:12:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:38.373 22:12:57 -- common/autotest_common.sh@10 -- # set +x 00:05:38.373 22:12:57 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:38.373 22:12:57 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:38.373 22:12:57 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:38.373 22:12:57 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:38.373 22:12:57 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:38.373 22:12:57 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:38.373 22:12:57 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:38.373 22:12:57 -- 
common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:38.373 22:12:57 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:38.373 22:12:57 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:38.373 22:12:57 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:38.373 22:12:57 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:38.373 22:12:57 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:38.373 22:12:57 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:38.373 22:12:57 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:05:38.373 22:12:57 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:38.373 22:12:57 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:38.373 22:12:57 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:38.373 22:12:57 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:38.373 22:12:57 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:38.373 22:12:57 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:38.373 22:12:57 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:05:38.373 22:12:57 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:05:38.373 22:12:57 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3086955 00:05:38.373 22:12:57 -- common/autotest_common.sh@1583 -- # waitforlisten 3086955 00:05:38.373 22:12:57 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.373 22:12:57 -- common/autotest_common.sh@833 -- # '[' -z 3086955 ']' 00:05:38.373 22:12:57 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.373 22:12:57 -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.373 22:12:57 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.373 22:12:57 -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.373 22:12:57 -- common/autotest_common.sh@10 -- # set +x 00:05:38.373 [2024-10-29 22:12:57.777285] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
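For reference, the bdf lookup that opal_revert_cleanup performs above reduces to a short shell sketch (illustrative only; gen_nvme.sh, the jq filter and the 0x0a54 device ID are exactly the pieces visible in the trace, while the variable names here are made up):

rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
# collect NVMe bus:device.function addresses from the generated config
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
# keep only controllers whose PCI device ID matches 0x0a54
for bdf in "${bdfs[@]}"; do
  [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
done
# in this run the loop prints the single controller 0000:5e:00.0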
00:05:38.373 [2024-10-29 22:12:57.777389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086955 ] 00:05:38.373 [2024-10-29 22:12:57.865239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.633 [2024-10-29 22:12:57.914652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.633 22:12:58 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.633 22:12:58 -- common/autotest_common.sh@866 -- # return 0 00:05:38.633 22:12:58 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:38.633 22:12:58 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:38.633 22:12:58 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:41.926 nvme0n1 00:05:41.926 22:13:01 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:41.926 [2024-10-29 22:13:01.339472] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:41.926 request: 00:05:41.926 { 00:05:41.926 "nvme_ctrlr_name": "nvme0", 00:05:41.926 "password": "test", 00:05:41.926 "method": "bdev_nvme_opal_revert", 00:05:41.926 "req_id": 1 00:05:41.926 } 00:05:41.926 Got JSON-RPC error response 00:05:41.926 response: 00:05:41.926 { 00:05:41.926 "code": -32602, 00:05:41.926 "message": "Invalid parameters" 00:05:41.926 } 00:05:41.926 22:13:01 -- common/autotest_common.sh@1589 -- # true 00:05:41.926 22:13:01 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:41.926 22:13:01 -- common/autotest_common.sh@1593 -- # killprocess 3086955 00:05:41.926 22:13:01 -- common/autotest_common.sh@952 -- # '[' -z 3086955 ']' 00:05:41.926 22:13:01 -- common/autotest_common.sh@956 -- # kill -0 3086955 00:05:41.926 22:13:01 -- common/autotest_common.sh@957 -- # uname 00:05:41.926 22:13:01 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:41.926 22:13:01 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3086955 00:05:41.926 22:13:01 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:41.926 22:13:01 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:41.926 22:13:01 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3086955' 00:05:41.926 killing process with pid 3086955 00:05:41.926 22:13:01 -- common/autotest_common.sh@971 -- # kill 3086955 00:05:41.926 22:13:01 -- common/autotest_common.sh@976 -- # wait 3086955 00:05:46.120 22:13:05 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:46.120 22:13:05 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:46.120 22:13:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:46.120 22:13:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:46.120 22:13:05 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:46.120 22:13:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.120 22:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:46.120 22:13:05 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:46.120 22:13:05 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:46.120 22:13:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.120 22:13:05 -- common/autotest_common.sh@1109 
-- # xtrace_disable 00:05:46.120 22:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:46.120 ************************************ 00:05:46.120 START TEST env 00:05:46.120 ************************************ 00:05:46.120 22:13:05 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:46.120 * Looking for test storage... 00:05:46.120 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:46.120 22:13:05 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:46.120 22:13:05 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:46.120 22:13:05 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.120 22:13:05 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.120 22:13:05 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.120 22:13:05 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.120 22:13:05 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.120 22:13:05 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.120 22:13:05 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.120 22:13:05 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.120 22:13:05 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.120 22:13:05 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.120 22:13:05 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.120 22:13:05 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.120 22:13:05 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.120 22:13:05 env -- scripts/common.sh@344 -- # case "$op" in 00:05:46.120 22:13:05 env -- scripts/common.sh@345 -- # : 1 00:05:46.120 22:13:05 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.120 22:13:05 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.120 22:13:05 env -- scripts/common.sh@365 -- # decimal 1 00:05:46.120 22:13:05 env -- scripts/common.sh@353 -- # local d=1 00:05:46.120 22:13:05 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.120 22:13:05 env -- scripts/common.sh@355 -- # echo 1 00:05:46.120 22:13:05 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.120 22:13:05 env -- scripts/common.sh@366 -- # decimal 2 00:05:46.120 22:13:05 env -- scripts/common.sh@353 -- # local d=2 00:05:46.120 22:13:05 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.120 22:13:05 env -- scripts/common.sh@355 -- # echo 2 00:05:46.120 22:13:05 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.120 22:13:05 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.120 22:13:05 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.120 22:13:05 env -- scripts/common.sh@368 -- # return 0 00:05:46.120 22:13:05 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.120 22:13:05 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.120 --rc genhtml_branch_coverage=1 00:05:46.120 --rc genhtml_function_coverage=1 00:05:46.120 --rc genhtml_legend=1 00:05:46.120 --rc geninfo_all_blocks=1 00:05:46.120 --rc geninfo_unexecuted_blocks=1 00:05:46.120 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.120 ' 00:05:46.120 22:13:05 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.120 --rc genhtml_branch_coverage=1 00:05:46.120 --rc genhtml_function_coverage=1 00:05:46.120 --rc genhtml_legend=1 00:05:46.120 --rc geninfo_all_blocks=1 00:05:46.120 --rc geninfo_unexecuted_blocks=1 00:05:46.120 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.120 ' 00:05:46.120 22:13:05 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:46.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.120 --rc genhtml_branch_coverage=1 00:05:46.120 --rc genhtml_function_coverage=1 00:05:46.120 --rc genhtml_legend=1 00:05:46.120 --rc geninfo_all_blocks=1 00:05:46.120 --rc geninfo_unexecuted_blocks=1 00:05:46.120 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.120 ' 00:05:46.120 22:13:05 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.120 --rc genhtml_branch_coverage=1 00:05:46.120 --rc genhtml_function_coverage=1 00:05:46.120 --rc genhtml_legend=1 00:05:46.120 --rc geninfo_all_blocks=1 00:05:46.120 --rc geninfo_unexecuted_blocks=1 00:05:46.120 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.120 ' 00:05:46.120 22:13:05 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:46.120 22:13:05 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.120 22:13:05 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.120 22:13:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.380 ************************************ 00:05:46.380 START TEST env_memory 00:05:46.380 ************************************ 00:05:46.380 22:13:05 env.env_memory -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:46.380 00:05:46.380 00:05:46.380 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.380 http://cunit.sourceforge.net/ 00:05:46.380 00:05:46.380 00:05:46.380 Suite: memory 00:05:46.380 Test: alloc and free memory map ...[2024-10-29 22:13:05.689849] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:46.380 passed 00:05:46.380 Test: mem map translation ...[2024-10-29 22:13:05.703647] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:46.380 [2024-10-29 22:13:05.703667] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:46.380 [2024-10-29 22:13:05.703699] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:46.380 [2024-10-29 22:13:05.703708] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:46.380 passed 00:05:46.380 Test: mem map registration ...[2024-10-29 22:13:05.725443] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:46.381 [2024-10-29 22:13:05.725460] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:46.381 passed 00:05:46.381 Test: mem map adjacent registrations ...passed 00:05:46.381 00:05:46.381 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.381 suites 1 1 n/a 0 0 00:05:46.381 tests 4 4 4 0 0 00:05:46.381 asserts 152 152 152 0 n/a 00:05:46.381 00:05:46.381 Elapsed time = 0.089 seconds 00:05:46.381 00:05:46.381 real 0m0.102s 00:05:46.381 user 0m0.090s 00:05:46.381 sys 0m0.012s 00:05:46.381 22:13:05 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.381 22:13:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:46.381 ************************************ 00:05:46.381 END TEST env_memory 00:05:46.381 ************************************ 00:05:46.381 22:13:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:46.381 22:13:05 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.381 22:13:05 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.381 22:13:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.381 ************************************ 00:05:46.381 START TEST env_vtophys 00:05:46.381 ************************************ 00:05:46.381 22:13:05 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:46.381 EAL: lib.eal log level changed from notice to debug 00:05:46.381 EAL: Detected lcore 0 as core 0 on socket 0 00:05:46.381 EAL: Detected lcore 1 as core 1 on socket 0 00:05:46.381 EAL: Detected lcore 2 as core 2 on socket 0 00:05:46.381 EAL: Detected lcore 3 as 
core 3 on socket 0 00:05:46.381 EAL: Detected lcore 4 as core 4 on socket 0 00:05:46.381 EAL: Detected lcore 5 as core 8 on socket 0 00:05:46.381 EAL: Detected lcore 6 as core 9 on socket 0 00:05:46.381 EAL: Detected lcore 7 as core 10 on socket 0 00:05:46.381 EAL: Detected lcore 8 as core 11 on socket 0 00:05:46.381 EAL: Detected lcore 9 as core 16 on socket 0 00:05:46.381 EAL: Detected lcore 10 as core 17 on socket 0 00:05:46.381 EAL: Detected lcore 11 as core 18 on socket 0 00:05:46.381 EAL: Detected lcore 12 as core 19 on socket 0 00:05:46.381 EAL: Detected lcore 13 as core 20 on socket 0 00:05:46.381 EAL: Detected lcore 14 as core 24 on socket 0 00:05:46.381 EAL: Detected lcore 15 as core 25 on socket 0 00:05:46.381 EAL: Detected lcore 16 as core 26 on socket 0 00:05:46.381 EAL: Detected lcore 17 as core 27 on socket 0 00:05:46.381 EAL: Detected lcore 18 as core 0 on socket 1 00:05:46.381 EAL: Detected lcore 19 as core 1 on socket 1 00:05:46.381 EAL: Detected lcore 20 as core 2 on socket 1 00:05:46.381 EAL: Detected lcore 21 as core 3 on socket 1 00:05:46.381 EAL: Detected lcore 22 as core 4 on socket 1 00:05:46.381 EAL: Detected lcore 23 as core 8 on socket 1 00:05:46.381 EAL: Detected lcore 24 as core 9 on socket 1 00:05:46.381 EAL: Detected lcore 25 as core 10 on socket 1 00:05:46.381 EAL: Detected lcore 26 as core 11 on socket 1 00:05:46.381 EAL: Detected lcore 27 as core 16 on socket 1 00:05:46.381 EAL: Detected lcore 28 as core 17 on socket 1 00:05:46.381 EAL: Detected lcore 29 as core 18 on socket 1 00:05:46.381 EAL: Detected lcore 30 as core 19 on socket 1 00:05:46.381 EAL: Detected lcore 31 as core 20 on socket 1 00:05:46.381 EAL: Detected lcore 32 as core 24 on socket 1 00:05:46.381 EAL: Detected lcore 33 as core 25 on socket 1 00:05:46.381 EAL: Detected lcore 34 as core 26 on socket 1 00:05:46.381 EAL: Detected lcore 35 as core 27 on socket 1 00:05:46.381 EAL: Detected lcore 36 as core 0 on socket 0 00:05:46.381 EAL: Detected lcore 37 as core 1 on socket 0 00:05:46.381 EAL: Detected lcore 38 as core 2 on socket 0 00:05:46.381 EAL: Detected lcore 39 as core 3 on socket 0 00:05:46.381 EAL: Detected lcore 40 as core 4 on socket 0 00:05:46.381 EAL: Detected lcore 41 as core 8 on socket 0 00:05:46.381 EAL: Detected lcore 42 as core 9 on socket 0 00:05:46.381 EAL: Detected lcore 43 as core 10 on socket 0 00:05:46.381 EAL: Detected lcore 44 as core 11 on socket 0 00:05:46.381 EAL: Detected lcore 45 as core 16 on socket 0 00:05:46.381 EAL: Detected lcore 46 as core 17 on socket 0 00:05:46.381 EAL: Detected lcore 47 as core 18 on socket 0 00:05:46.381 EAL: Detected lcore 48 as core 19 on socket 0 00:05:46.381 EAL: Detected lcore 49 as core 20 on socket 0 00:05:46.381 EAL: Detected lcore 50 as core 24 on socket 0 00:05:46.381 EAL: Detected lcore 51 as core 25 on socket 0 00:05:46.381 EAL: Detected lcore 52 as core 26 on socket 0 00:05:46.381 EAL: Detected lcore 53 as core 27 on socket 0 00:05:46.381 EAL: Detected lcore 54 as core 0 on socket 1 00:05:46.381 EAL: Detected lcore 55 as core 1 on socket 1 00:05:46.381 EAL: Detected lcore 56 as core 2 on socket 1 00:05:46.381 EAL: Detected lcore 57 as core 3 on socket 1 00:05:46.381 EAL: Detected lcore 58 as core 4 on socket 1 00:05:46.381 EAL: Detected lcore 59 as core 8 on socket 1 00:05:46.381 EAL: Detected lcore 60 as core 9 on socket 1 00:05:46.381 EAL: Detected lcore 61 as core 10 on socket 1 00:05:46.381 EAL: Detected lcore 62 as core 11 on socket 1 00:05:46.381 EAL: Detected lcore 63 as core 16 on socket 1 00:05:46.381 EAL: 
Detected lcore 64 as core 17 on socket 1 00:05:46.381 EAL: Detected lcore 65 as core 18 on socket 1 00:05:46.381 EAL: Detected lcore 66 as core 19 on socket 1 00:05:46.381 EAL: Detected lcore 67 as core 20 on socket 1 00:05:46.381 EAL: Detected lcore 68 as core 24 on socket 1 00:05:46.381 EAL: Detected lcore 69 as core 25 on socket 1 00:05:46.381 EAL: Detected lcore 70 as core 26 on socket 1 00:05:46.381 EAL: Detected lcore 71 as core 27 on socket 1 00:05:46.381 EAL: Maximum logical cores by configuration: 128 00:05:46.381 EAL: Detected CPU lcores: 72 00:05:46.381 EAL: Detected NUMA nodes: 2 00:05:46.381 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:46.381 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:46.381 EAL: Checking presence of .so 'librte_eal.so' 00:05:46.381 EAL: Detected static linkage of DPDK 00:05:46.381 EAL: No shared files mode enabled, IPC will be disabled 00:05:46.381 EAL: Bus pci wants IOVA as 'DC' 00:05:46.381 EAL: Buses did not request a specific IOVA mode. 00:05:46.381 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:46.381 EAL: Selected IOVA mode 'VA' 00:05:46.381 EAL: Probing VFIO support... 00:05:46.381 EAL: IOMMU type 1 (Type 1) is supported 00:05:46.381 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:46.381 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:46.381 EAL: VFIO support initialized 00:05:46.381 EAL: Ask a virtual area of 0x2e000 bytes 00:05:46.381 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:46.381 EAL: Setting up physically contiguous memory... 00:05:46.381 EAL: Setting maximum number of open files to 524288 00:05:46.381 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:46.381 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:46.381 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:46.381 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.381 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:46.381 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.381 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.381 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:46.381 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:46.381 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.381 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:46.381 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.381 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.381 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:46.381 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:46.381 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.381 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:46.381 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.381 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.381 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:46.381 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:46.381 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.381 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:46.381 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.381 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.381 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:46.381 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:46.381 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:46.381 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.381 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:46.381 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:46.381 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.381 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:46.381 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:46.381 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.381 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:46.381 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:46.381 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.381 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:46.381 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:46.381 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.381 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:46.381 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:46.381 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.381 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:46.381 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:46.381 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.381 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:46.381 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:46.381 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.381 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:46.381 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:46.381 EAL: Hugepages will be freed exactly as allocated. 00:05:46.381 EAL: No shared files mode enabled, IPC is disabled 00:05:46.381 EAL: No shared files mode enabled, IPC is disabled 00:05:46.381 EAL: TSC frequency is ~2300000 KHz 00:05:46.381 EAL: Main lcore 0 is ready (tid=7f167e7b3a00;cpuset=[0]) 00:05:46.381 EAL: Trying to obtain current memory policy. 00:05:46.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.381 EAL: Restoring previous memory policy: 0 00:05:46.381 EAL: request: mp_malloc_sync 00:05:46.381 EAL: No shared files mode enabled, IPC is disabled 00:05:46.381 EAL: Heap on socket 0 was expanded by 2MB 00:05:46.381 EAL: No shared files mode enabled, IPC is disabled 00:05:46.641 EAL: Mem event callback 'spdk:(nil)' registered 00:05:46.641 00:05:46.641 00:05:46.641 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.641 http://cunit.sourceforge.net/ 00:05:46.641 00:05:46.641 00:05:46.641 Suite: components_suite 00:05:46.641 Test: vtophys_malloc_test ...passed 00:05:46.641 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:46.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.642 EAL: Restoring previous memory policy: 4 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was expanded by 4MB 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was shrunk by 4MB 00:05:46.642 EAL: Trying to obtain current memory policy. 
00:05:46.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.642 EAL: Restoring previous memory policy: 4 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was expanded by 6MB 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was shrunk by 6MB 00:05:46.642 EAL: Trying to obtain current memory policy. 00:05:46.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.642 EAL: Restoring previous memory policy: 4 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was expanded by 10MB 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was shrunk by 10MB 00:05:46.642 EAL: Trying to obtain current memory policy. 00:05:46.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.642 EAL: Restoring previous memory policy: 4 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was expanded by 18MB 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was shrunk by 18MB 00:05:46.642 EAL: Trying to obtain current memory policy. 00:05:46.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.642 EAL: Restoring previous memory policy: 4 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was expanded by 34MB 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was shrunk by 34MB 00:05:46.642 EAL: Trying to obtain current memory policy. 00:05:46.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.642 EAL: Restoring previous memory policy: 4 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was expanded by 66MB 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was shrunk by 66MB 00:05:46.642 EAL: Trying to obtain current memory policy. 
00:05:46.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.642 EAL: Restoring previous memory policy: 4 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was expanded by 130MB 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was shrunk by 130MB 00:05:46.642 EAL: Trying to obtain current memory policy. 00:05:46.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.642 EAL: Restoring previous memory policy: 4 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.642 EAL: request: mp_malloc_sync 00:05:46.642 EAL: No shared files mode enabled, IPC is disabled 00:05:46.642 EAL: Heap on socket 0 was expanded by 258MB 00:05:46.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.901 EAL: request: mp_malloc_sync 00:05:46.901 EAL: No shared files mode enabled, IPC is disabled 00:05:46.901 EAL: Heap on socket 0 was shrunk by 258MB 00:05:46.901 EAL: Trying to obtain current memory policy. 00:05:46.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.901 EAL: Restoring previous memory policy: 4 00:05:46.901 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.901 EAL: request: mp_malloc_sync 00:05:46.901 EAL: No shared files mode enabled, IPC is disabled 00:05:46.901 EAL: Heap on socket 0 was expanded by 514MB 00:05:46.901 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.165 EAL: request: mp_malloc_sync 00:05:47.165 EAL: No shared files mode enabled, IPC is disabled 00:05:47.165 EAL: Heap on socket 0 was shrunk by 514MB 00:05:47.165 EAL: Trying to obtain current memory policy. 
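The expand/shrink sizes stepped through by vtophys_spdk_malloc_test above (4, 6, 10, 18, 34, 66, 130, 258 and 514 MB so far, with 1026 MB following) are simply 2 MB plus a power of two; a quick illustrative check, not part of the test itself:

# reproduce the size sequence used by the malloc test iterations
for k in $(seq 1 10); do printf '%dMB ' $((2 + 2**k)); done; echo
# -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB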
00:05:47.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.165 EAL: Restoring previous memory policy: 4 00:05:47.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.165 EAL: request: mp_malloc_sync 00:05:47.165 EAL: No shared files mode enabled, IPC is disabled 00:05:47.165 EAL: Heap on socket 0 was expanded by 1026MB 00:05:47.460 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.460 EAL: request: mp_malloc_sync 00:05:47.460 EAL: No shared files mode enabled, IPC is disabled 00:05:47.460 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:47.460 passed 00:05:47.460 00:05:47.460 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.460 suites 1 1 n/a 0 0 00:05:47.460 tests 2 2 2 0 0 00:05:47.461 asserts 497 497 497 0 n/a 00:05:47.461 00:05:47.461 Elapsed time = 0.987 seconds 00:05:47.461 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.461 EAL: request: mp_malloc_sync 00:05:47.461 EAL: No shared files mode enabled, IPC is disabled 00:05:47.461 EAL: Heap on socket 0 was shrunk by 2MB 00:05:47.461 EAL: No shared files mode enabled, IPC is disabled 00:05:47.461 EAL: No shared files mode enabled, IPC is disabled 00:05:47.461 EAL: No shared files mode enabled, IPC is disabled 00:05:47.461 00:05:47.461 real 0m1.119s 00:05:47.461 user 0m0.635s 00:05:47.461 sys 0m0.462s 00:05:47.461 22:13:06 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.461 22:13:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:47.461 ************************************ 00:05:47.461 END TEST env_vtophys 00:05:47.461 ************************************ 00:05:47.788 22:13:07 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:47.788 22:13:07 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.788 22:13:07 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.788 22:13:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.788 ************************************ 00:05:47.788 START TEST env_pci 00:05:47.788 ************************************ 00:05:47.788 22:13:07 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:47.788 00:05:47.788 00:05:47.788 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.788 http://cunit.sourceforge.net/ 00:05:47.788 00:05:47.788 00:05:47.788 Suite: pci 00:05:47.788 Test: pci_hook ...[2024-10-29 22:13:07.057280] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3088213 has claimed it 00:05:47.788 EAL: Cannot find device (10000:00:01.0) 00:05:47.788 EAL: Failed to attach device on primary process 00:05:47.788 passed 00:05:47.788 00:05:47.788 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.788 suites 1 1 n/a 0 0 00:05:47.788 tests 1 1 1 0 0 00:05:47.788 asserts 25 25 25 0 n/a 00:05:47.788 00:05:47.788 Elapsed time = 0.033 seconds 00:05:47.788 00:05:47.788 real 0m0.054s 00:05:47.788 user 0m0.019s 00:05:47.788 sys 0m0.035s 00:05:47.788 22:13:07 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.788 22:13:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:47.788 ************************************ 00:05:47.788 END TEST env_pci 00:05:47.788 ************************************ 00:05:47.788 22:13:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:47.788 
22:13:07 env -- env/env.sh@15 -- # uname 00:05:47.788 22:13:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:47.788 22:13:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:47.788 22:13:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:47.788 22:13:07 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:47.788 22:13:07 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.788 22:13:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.788 ************************************ 00:05:47.788 START TEST env_dpdk_post_init 00:05:47.788 ************************************ 00:05:47.788 22:13:07 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:47.788 EAL: Detected CPU lcores: 72 00:05:47.788 EAL: Detected NUMA nodes: 2 00:05:47.788 EAL: Detected static linkage of DPDK 00:05:47.788 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.788 EAL: Selected IOVA mode 'VA' 00:05:47.788 EAL: VFIO support initialized 00:05:47.788 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:48.089 EAL: Using IOMMU type 1 (Type 1) 00:05:48.677 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:53.948 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:53.948 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001000000 00:05:54.208 Starting DPDK initialization... 00:05:54.208 Starting SPDK post initialization... 00:05:54.208 SPDK NVMe probe 00:05:54.208 Attaching to 0000:5e:00.0 00:05:54.208 Attached to 0000:5e:00.0 00:05:54.208 Cleaning up... 
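The many "ioatdma -> vfio-pci" and "nvme -> vfio-pci" transitions printed by setup.sh throughout this stage are plain kernel driver rebinds. At the sysfs level such a rebind looks roughly like the sketch below (a generic illustration, not the script's exact code; the address is the NVMe controller used in this run):

bdf=0000:5e:00.0
echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"   # pin the next probe to vfio-pci
echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"     # detach nvme (or ioatdma)
echo "$bdf"   > /sys/bus/pci/drivers_probe                    # re-probe, picking up the override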
00:05:54.208 00:05:54.208 real 0m6.508s 00:05:54.208 user 0m4.691s 00:05:54.208 sys 0m1.067s 00:05:54.208 22:13:13 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.208 22:13:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:54.209 ************************************ 00:05:54.209 END TEST env_dpdk_post_init 00:05:54.209 ************************************ 00:05:54.469 22:13:13 env -- env/env.sh@26 -- # uname 00:05:54.469 22:13:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:54.469 22:13:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:54.469 22:13:13 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.469 22:13:13 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.469 22:13:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.469 ************************************ 00:05:54.469 START TEST env_mem_callbacks 00:05:54.469 ************************************ 00:05:54.469 22:13:13 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:54.469 EAL: Detected CPU lcores: 72 00:05:54.469 EAL: Detected NUMA nodes: 2 00:05:54.469 EAL: Detected static linkage of DPDK 00:05:54.469 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:54.469 EAL: Selected IOVA mode 'VA' 00:05:54.469 EAL: VFIO support initialized 00:05:54.469 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:54.469 00:05:54.469 00:05:54.469 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.469 http://cunit.sourceforge.net/ 00:05:54.469 00:05:54.469 00:05:54.469 Suite: memory 00:05:54.469 Test: test ... 
00:05:54.469 register 0x200000200000 2097152 00:05:54.469 malloc 3145728 00:05:54.469 register 0x200000400000 4194304 00:05:54.469 buf 0x200000500000 len 3145728 PASSED 00:05:54.469 malloc 64 00:05:54.469 buf 0x2000004fff40 len 64 PASSED 00:05:54.469 malloc 4194304 00:05:54.469 register 0x200000800000 6291456 00:05:54.469 buf 0x200000a00000 len 4194304 PASSED 00:05:54.469 free 0x200000500000 3145728 00:05:54.469 free 0x2000004fff40 64 00:05:54.469 unregister 0x200000400000 4194304 PASSED 00:05:54.469 free 0x200000a00000 4194304 00:05:54.469 unregister 0x200000800000 6291456 PASSED 00:05:54.469 malloc 8388608 00:05:54.469 register 0x200000400000 10485760 00:05:54.469 buf 0x200000600000 len 8388608 PASSED 00:05:54.469 free 0x200000600000 8388608 00:05:54.469 unregister 0x200000400000 10485760 PASSED 00:05:54.469 passed 00:05:54.469 00:05:54.469 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.469 suites 1 1 n/a 0 0 00:05:54.469 tests 1 1 1 0 0 00:05:54.469 asserts 15 15 15 0 n/a 00:05:54.469 00:05:54.469 Elapsed time = 0.008 seconds 00:05:54.469 00:05:54.469 real 0m0.070s 00:05:54.469 user 0m0.021s 00:05:54.469 sys 0m0.048s 00:05:54.469 22:13:13 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.469 22:13:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:54.469 ************************************ 00:05:54.469 END TEST env_mem_callbacks 00:05:54.469 ************************************ 00:05:54.469 00:05:54.469 real 0m8.475s 00:05:54.469 user 0m5.710s 00:05:54.469 sys 0m2.041s 00:05:54.469 22:13:13 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.469 22:13:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.469 ************************************ 00:05:54.469 END TEST env 00:05:54.469 ************************************ 00:05:54.469 22:13:13 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:54.469 22:13:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.469 22:13:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.469 22:13:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.469 ************************************ 00:05:54.469 START TEST rpc 00:05:54.469 ************************************ 00:05:54.469 22:13:13 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:54.729 * Looking for test storage... 
00:05:54.729 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:54.729 22:13:14 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:54.729 22:13:14 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:54.729 22:13:14 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:54.729 22:13:14 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:54.729 22:13:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.729 22:13:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.729 22:13:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.729 22:13:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.729 22:13:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.729 22:13:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.729 22:13:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.729 22:13:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.729 22:13:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.729 22:13:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.729 22:13:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.729 22:13:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:54.729 22:13:14 rpc -- scripts/common.sh@345 -- # : 1 00:05:54.729 22:13:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.729 22:13:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.729 22:13:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:54.729 22:13:14 rpc -- scripts/common.sh@353 -- # local d=1 00:05:54.729 22:13:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.729 22:13:14 rpc -- scripts/common.sh@355 -- # echo 1 00:05:54.729 22:13:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.729 22:13:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:54.729 22:13:14 rpc -- scripts/common.sh@353 -- # local d=2 00:05:54.729 22:13:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.729 22:13:14 rpc -- scripts/common.sh@355 -- # echo 2 00:05:54.729 22:13:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.729 22:13:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.729 22:13:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.729 22:13:14 rpc -- scripts/common.sh@368 -- # return 0 00:05:54.729 22:13:14 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.729 22:13:14 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:54.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.729 --rc genhtml_branch_coverage=1 00:05:54.729 --rc genhtml_function_coverage=1 00:05:54.729 --rc genhtml_legend=1 00:05:54.729 --rc geninfo_all_blocks=1 00:05:54.729 --rc geninfo_unexecuted_blocks=1 00:05:54.729 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:54.729 ' 00:05:54.729 22:13:14 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:54.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.729 --rc genhtml_branch_coverage=1 00:05:54.729 --rc genhtml_function_coverage=1 00:05:54.729 --rc genhtml_legend=1 00:05:54.729 --rc geninfo_all_blocks=1 00:05:54.729 --rc geninfo_unexecuted_blocks=1 00:05:54.729 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:54.729 ' 00:05:54.729 22:13:14 rpc -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:05:54.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.729 --rc genhtml_branch_coverage=1 00:05:54.729 --rc genhtml_function_coverage=1 00:05:54.729 --rc genhtml_legend=1 00:05:54.729 --rc geninfo_all_blocks=1 00:05:54.729 --rc geninfo_unexecuted_blocks=1 00:05:54.729 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:54.729 ' 00:05:54.729 22:13:14 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:54.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.729 --rc genhtml_branch_coverage=1 00:05:54.730 --rc genhtml_function_coverage=1 00:05:54.730 --rc genhtml_legend=1 00:05:54.730 --rc geninfo_all_blocks=1 00:05:54.730 --rc geninfo_unexecuted_blocks=1 00:05:54.730 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:54.730 ' 00:05:54.730 22:13:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3089301 00:05:54.730 22:13:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.730 22:13:14 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:54.730 22:13:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3089301 00:05:54.730 22:13:14 rpc -- common/autotest_common.sh@833 -- # '[' -z 3089301 ']' 00:05:54.730 22:13:14 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.730 22:13:14 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:54.730 22:13:14 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.730 22:13:14 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:54.730 22:13:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.730 [2024-10-29 22:13:14.195630] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:05:54.730 [2024-10-29 22:13:14.195707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089301 ] 00:05:54.989 [2024-10-29 22:13:14.283469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.989 [2024-10-29 22:13:14.329950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:54.989 [2024-10-29 22:13:14.329991] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3089301' to capture a snapshot of events at runtime. 00:05:54.989 [2024-10-29 22:13:14.330001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:54.989 [2024-10-29 22:13:14.330011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:54.989 [2024-10-29 22:13:14.330018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3089301 for offline analysis/debug. 
00:05:54.989 [2024-10-29 22:13:14.330412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.249 22:13:14 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:55.249 22:13:14 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:55.249 22:13:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:55.249 22:13:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:55.249 22:13:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:55.249 22:13:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:55.249 22:13:14 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:55.249 22:13:14 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.249 22:13:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.249 ************************************ 00:05:55.249 START TEST rpc_integrity 00:05:55.249 ************************************ 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:55.249 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.249 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:55.249 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:55.249 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:55.249 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.249 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:55.249 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.249 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:55.249 { 00:05:55.249 "name": "Malloc0", 00:05:55.249 "aliases": [ 00:05:55.249 "4a2d38cb-5bdc-4f16-82df-50a9c51f2c1e" 00:05:55.249 ], 00:05:55.249 "product_name": "Malloc disk", 00:05:55.249 "block_size": 512, 00:05:55.249 "num_blocks": 16384, 00:05:55.249 "uuid": "4a2d38cb-5bdc-4f16-82df-50a9c51f2c1e", 00:05:55.249 "assigned_rate_limits": { 00:05:55.249 "rw_ios_per_sec": 0, 00:05:55.249 "rw_mbytes_per_sec": 0, 00:05:55.249 "r_mbytes_per_sec": 0, 00:05:55.249 "w_mbytes_per_sec": 
0 00:05:55.249 }, 00:05:55.249 "claimed": false, 00:05:55.249 "zoned": false, 00:05:55.249 "supported_io_types": { 00:05:55.249 "read": true, 00:05:55.249 "write": true, 00:05:55.249 "unmap": true, 00:05:55.249 "flush": true, 00:05:55.249 "reset": true, 00:05:55.249 "nvme_admin": false, 00:05:55.249 "nvme_io": false, 00:05:55.249 "nvme_io_md": false, 00:05:55.249 "write_zeroes": true, 00:05:55.249 "zcopy": true, 00:05:55.249 "get_zone_info": false, 00:05:55.249 "zone_management": false, 00:05:55.249 "zone_append": false, 00:05:55.249 "compare": false, 00:05:55.249 "compare_and_write": false, 00:05:55.249 "abort": true, 00:05:55.249 "seek_hole": false, 00:05:55.249 "seek_data": false, 00:05:55.249 "copy": true, 00:05:55.249 "nvme_iov_md": false 00:05:55.249 }, 00:05:55.249 "memory_domains": [ 00:05:55.249 { 00:05:55.249 "dma_device_id": "system", 00:05:55.249 "dma_device_type": 1 00:05:55.249 }, 00:05:55.249 { 00:05:55.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.249 "dma_device_type": 2 00:05:55.249 } 00:05:55.249 ], 00:05:55.249 "driver_specific": {} 00:05:55.249 } 00:05:55.249 ]' 00:05:55.249 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:55.249 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:55.249 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.249 [2024-10-29 22:13:14.724020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:55.249 [2024-10-29 22:13:14.724055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:55.249 [2024-10-29 22:13:14.724073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5c64770 00:05:55.249 [2024-10-29 22:13:14.724082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:55.249 [2024-10-29 22:13:14.725005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:55.249 [2024-10-29 22:13:14.725027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:55.249 Passthru0 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.249 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.249 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.249 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:55.249 { 00:05:55.249 "name": "Malloc0", 00:05:55.249 "aliases": [ 00:05:55.249 "4a2d38cb-5bdc-4f16-82df-50a9c51f2c1e" 00:05:55.249 ], 00:05:55.249 "product_name": "Malloc disk", 00:05:55.249 "block_size": 512, 00:05:55.249 "num_blocks": 16384, 00:05:55.249 "uuid": "4a2d38cb-5bdc-4f16-82df-50a9c51f2c1e", 00:05:55.250 "assigned_rate_limits": { 00:05:55.250 "rw_ios_per_sec": 0, 00:05:55.250 "rw_mbytes_per_sec": 0, 00:05:55.250 "r_mbytes_per_sec": 0, 00:05:55.250 "w_mbytes_per_sec": 0 00:05:55.250 }, 00:05:55.250 "claimed": true, 00:05:55.250 "claim_type": "exclusive_write", 00:05:55.250 "zoned": false, 00:05:55.250 "supported_io_types": { 00:05:55.250 "read": true, 00:05:55.250 "write": true, 00:05:55.250 "unmap": true, 
00:05:55.250 "flush": true, 00:05:55.250 "reset": true, 00:05:55.250 "nvme_admin": false, 00:05:55.250 "nvme_io": false, 00:05:55.250 "nvme_io_md": false, 00:05:55.250 "write_zeroes": true, 00:05:55.250 "zcopy": true, 00:05:55.250 "get_zone_info": false, 00:05:55.250 "zone_management": false, 00:05:55.250 "zone_append": false, 00:05:55.250 "compare": false, 00:05:55.250 "compare_and_write": false, 00:05:55.250 "abort": true, 00:05:55.250 "seek_hole": false, 00:05:55.250 "seek_data": false, 00:05:55.250 "copy": true, 00:05:55.250 "nvme_iov_md": false 00:05:55.250 }, 00:05:55.250 "memory_domains": [ 00:05:55.250 { 00:05:55.250 "dma_device_id": "system", 00:05:55.250 "dma_device_type": 1 00:05:55.250 }, 00:05:55.250 { 00:05:55.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.250 "dma_device_type": 2 00:05:55.250 } 00:05:55.250 ], 00:05:55.250 "driver_specific": {} 00:05:55.250 }, 00:05:55.250 { 00:05:55.250 "name": "Passthru0", 00:05:55.250 "aliases": [ 00:05:55.250 "18517131-b551-51b8-a38f-ff7535330622" 00:05:55.250 ], 00:05:55.250 "product_name": "passthru", 00:05:55.250 "block_size": 512, 00:05:55.250 "num_blocks": 16384, 00:05:55.250 "uuid": "18517131-b551-51b8-a38f-ff7535330622", 00:05:55.250 "assigned_rate_limits": { 00:05:55.250 "rw_ios_per_sec": 0, 00:05:55.250 "rw_mbytes_per_sec": 0, 00:05:55.250 "r_mbytes_per_sec": 0, 00:05:55.250 "w_mbytes_per_sec": 0 00:05:55.250 }, 00:05:55.250 "claimed": false, 00:05:55.250 "zoned": false, 00:05:55.250 "supported_io_types": { 00:05:55.250 "read": true, 00:05:55.250 "write": true, 00:05:55.250 "unmap": true, 00:05:55.250 "flush": true, 00:05:55.250 "reset": true, 00:05:55.250 "nvme_admin": false, 00:05:55.250 "nvme_io": false, 00:05:55.250 "nvme_io_md": false, 00:05:55.250 "write_zeroes": true, 00:05:55.250 "zcopy": true, 00:05:55.250 "get_zone_info": false, 00:05:55.250 "zone_management": false, 00:05:55.250 "zone_append": false, 00:05:55.250 "compare": false, 00:05:55.250 "compare_and_write": false, 00:05:55.250 "abort": true, 00:05:55.250 "seek_hole": false, 00:05:55.250 "seek_data": false, 00:05:55.250 "copy": true, 00:05:55.250 "nvme_iov_md": false 00:05:55.250 }, 00:05:55.250 "memory_domains": [ 00:05:55.250 { 00:05:55.250 "dma_device_id": "system", 00:05:55.250 "dma_device_type": 1 00:05:55.250 }, 00:05:55.250 { 00:05:55.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.250 "dma_device_type": 2 00:05:55.250 } 00:05:55.250 ], 00:05:55.250 "driver_specific": { 00:05:55.250 "passthru": { 00:05:55.250 "name": "Passthru0", 00:05:55.250 "base_bdev_name": "Malloc0" 00:05:55.250 } 00:05:55.250 } 00:05:55.250 } 00:05:55.250 ]' 00:05:55.250 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:55.509 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:55.509 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:55.509 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.509 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.509 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.509 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:55.509 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.509 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.509 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.509 22:13:14 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:55.509 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.509 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.509 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.509 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:55.509 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:55.509 22:13:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:55.509 00:05:55.509 real 0m0.273s 00:05:55.509 user 0m0.167s 00:05:55.509 sys 0m0.048s 00:05:55.509 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.509 22:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.509 ************************************ 00:05:55.509 END TEST rpc_integrity 00:05:55.509 ************************************ 00:05:55.509 22:13:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:55.509 22:13:14 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:55.509 22:13:14 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.509 22:13:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.510 ************************************ 00:05:55.510 START TEST rpc_plugins 00:05:55.510 ************************************ 00:05:55.510 22:13:14 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:55.510 22:13:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:55.510 22:13:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.510 22:13:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.510 22:13:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.510 22:13:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:55.510 22:13:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:55.510 22:13:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.510 22:13:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.510 22:13:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.510 22:13:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:55.510 { 00:05:55.510 "name": "Malloc1", 00:05:55.510 "aliases": [ 00:05:55.510 "1ecfb0cb-d7bd-4910-a477-08cfde8d42ba" 00:05:55.510 ], 00:05:55.510 "product_name": "Malloc disk", 00:05:55.510 "block_size": 4096, 00:05:55.510 "num_blocks": 256, 00:05:55.510 "uuid": "1ecfb0cb-d7bd-4910-a477-08cfde8d42ba", 00:05:55.510 "assigned_rate_limits": { 00:05:55.510 "rw_ios_per_sec": 0, 00:05:55.510 "rw_mbytes_per_sec": 0, 00:05:55.510 "r_mbytes_per_sec": 0, 00:05:55.510 "w_mbytes_per_sec": 0 00:05:55.510 }, 00:05:55.510 "claimed": false, 00:05:55.510 "zoned": false, 00:05:55.510 "supported_io_types": { 00:05:55.510 "read": true, 00:05:55.510 "write": true, 00:05:55.510 "unmap": true, 00:05:55.510 "flush": true, 00:05:55.510 "reset": true, 00:05:55.510 "nvme_admin": false, 00:05:55.510 "nvme_io": false, 00:05:55.510 "nvme_io_md": false, 00:05:55.510 "write_zeroes": true, 00:05:55.510 "zcopy": true, 00:05:55.510 "get_zone_info": false, 00:05:55.510 "zone_management": false, 00:05:55.510 "zone_append": false, 00:05:55.510 "compare": false, 00:05:55.510 "compare_and_write": false, 00:05:55.510 "abort": true, 00:05:55.510 "seek_hole": false, 00:05:55.510 "seek_data": false, 00:05:55.510 "copy": true, 00:05:55.510 
"nvme_iov_md": false 00:05:55.510 }, 00:05:55.510 "memory_domains": [ 00:05:55.510 { 00:05:55.510 "dma_device_id": "system", 00:05:55.510 "dma_device_type": 1 00:05:55.510 }, 00:05:55.510 { 00:05:55.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.510 "dma_device_type": 2 00:05:55.510 } 00:05:55.510 ], 00:05:55.510 "driver_specific": {} 00:05:55.510 } 00:05:55.510 ]' 00:05:55.510 22:13:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:55.510 22:13:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:55.510 22:13:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:55.510 22:13:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.510 22:13:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.510 22:13:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.510 22:13:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:55.510 22:13:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.510 22:13:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.769 22:13:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.769 22:13:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:55.769 22:13:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:55.769 22:13:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:55.769 00:05:55.769 real 0m0.149s 00:05:55.769 user 0m0.097s 00:05:55.769 sys 0m0.017s 00:05:55.769 22:13:15 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.769 22:13:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.769 ************************************ 00:05:55.769 END TEST rpc_plugins 00:05:55.769 ************************************ 00:05:55.769 22:13:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:55.769 22:13:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:55.769 22:13:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.769 22:13:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.769 ************************************ 00:05:55.769 START TEST rpc_trace_cmd_test 00:05:55.769 ************************************ 00:05:55.769 22:13:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:55.769 22:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:55.769 22:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:55.769 22:13:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.769 22:13:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:55.769 22:13:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.769 22:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:55.769 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3089301", 00:05:55.769 "tpoint_group_mask": "0x8", 00:05:55.769 "iscsi_conn": { 00:05:55.769 "mask": "0x2", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "scsi": { 00:05:55.769 "mask": "0x4", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "bdev": { 00:05:55.769 "mask": "0x8", 00:05:55.769 "tpoint_mask": "0xffffffffffffffff" 00:05:55.769 }, 00:05:55.769 "nvmf_rdma": { 00:05:55.769 "mask": "0x10", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "nvmf_tcp": { 00:05:55.769 "mask": "0x20", 
00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "ftl": { 00:05:55.769 "mask": "0x40", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "blobfs": { 00:05:55.769 "mask": "0x80", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "dsa": { 00:05:55.769 "mask": "0x200", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "thread": { 00:05:55.769 "mask": "0x400", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "nvme_pcie": { 00:05:55.769 "mask": "0x800", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "iaa": { 00:05:55.769 "mask": "0x1000", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "nvme_tcp": { 00:05:55.769 "mask": "0x2000", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "bdev_nvme": { 00:05:55.769 "mask": "0x4000", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "sock": { 00:05:55.769 "mask": "0x8000", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "blob": { 00:05:55.769 "mask": "0x10000", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "bdev_raid": { 00:05:55.769 "mask": "0x20000", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 }, 00:05:55.769 "scheduler": { 00:05:55.769 "mask": "0x40000", 00:05:55.769 "tpoint_mask": "0x0" 00:05:55.769 } 00:05:55.769 }' 00:05:55.769 22:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:55.769 22:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:55.769 22:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:55.769 22:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:55.769 22:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:56.029 22:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:56.029 22:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:56.029 22:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:56.029 22:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:56.029 22:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:56.029 00:05:56.029 real 0m0.231s 00:05:56.029 user 0m0.186s 00:05:56.029 sys 0m0.036s 00:05:56.029 22:13:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.029 22:13:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:56.029 ************************************ 00:05:56.029 END TEST rpc_trace_cmd_test 00:05:56.029 ************************************ 00:05:56.029 22:13:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:56.029 22:13:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:56.029 22:13:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:56.029 22:13:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.029 22:13:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.029 22:13:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.029 ************************************ 00:05:56.029 START TEST rpc_daemon_integrity 00:05:56.029 ************************************ 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.029 22:13:15 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.029 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.288 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.288 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:56.288 { 00:05:56.288 "name": "Malloc2", 00:05:56.288 "aliases": [ 00:05:56.288 "2f7e7543-f357-4a65-b53c-c63dde93bd93" 00:05:56.288 ], 00:05:56.288 "product_name": "Malloc disk", 00:05:56.288 "block_size": 512, 00:05:56.288 "num_blocks": 16384, 00:05:56.288 "uuid": "2f7e7543-f357-4a65-b53c-c63dde93bd93", 00:05:56.288 "assigned_rate_limits": { 00:05:56.288 "rw_ios_per_sec": 0, 00:05:56.288 "rw_mbytes_per_sec": 0, 00:05:56.288 "r_mbytes_per_sec": 0, 00:05:56.288 "w_mbytes_per_sec": 0 00:05:56.288 }, 00:05:56.288 "claimed": false, 00:05:56.288 "zoned": false, 00:05:56.288 "supported_io_types": { 00:05:56.288 "read": true, 00:05:56.288 "write": true, 00:05:56.288 "unmap": true, 00:05:56.288 "flush": true, 00:05:56.288 "reset": true, 00:05:56.288 "nvme_admin": false, 00:05:56.288 "nvme_io": false, 00:05:56.288 "nvme_io_md": false, 00:05:56.288 "write_zeroes": true, 00:05:56.288 "zcopy": true, 00:05:56.288 "get_zone_info": false, 00:05:56.288 "zone_management": false, 00:05:56.288 "zone_append": false, 00:05:56.288 "compare": false, 00:05:56.288 "compare_and_write": false, 00:05:56.288 "abort": true, 00:05:56.288 "seek_hole": false, 00:05:56.288 "seek_data": false, 00:05:56.288 "copy": true, 00:05:56.288 "nvme_iov_md": false 00:05:56.288 }, 00:05:56.288 "memory_domains": [ 00:05:56.288 { 00:05:56.288 "dma_device_id": "system", 00:05:56.288 "dma_device_type": 1 00:05:56.288 }, 00:05:56.288 { 00:05:56.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.288 "dma_device_type": 2 00:05:56.288 } 00:05:56.288 ], 00:05:56.288 "driver_specific": {} 00:05:56.288 } 00:05:56.288 ]' 00:05:56.288 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:56.288 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:56.288 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:56.288 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.288 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.288 [2024-10-29 22:13:15.622334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:56.288 
[2024-10-29 22:13:15.622367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:56.288 [2024-10-29 22:13:15.622384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5d86b30 00:05:56.288 [2024-10-29 22:13:15.622393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:56.288 [2024-10-29 22:13:15.623309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:56.288 [2024-10-29 22:13:15.623330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:56.288 Passthru0 00:05:56.288 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.288 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:56.288 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.288 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.288 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.288 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:56.289 { 00:05:56.289 "name": "Malloc2", 00:05:56.289 "aliases": [ 00:05:56.289 "2f7e7543-f357-4a65-b53c-c63dde93bd93" 00:05:56.289 ], 00:05:56.289 "product_name": "Malloc disk", 00:05:56.289 "block_size": 512, 00:05:56.289 "num_blocks": 16384, 00:05:56.289 "uuid": "2f7e7543-f357-4a65-b53c-c63dde93bd93", 00:05:56.289 "assigned_rate_limits": { 00:05:56.289 "rw_ios_per_sec": 0, 00:05:56.289 "rw_mbytes_per_sec": 0, 00:05:56.289 "r_mbytes_per_sec": 0, 00:05:56.289 "w_mbytes_per_sec": 0 00:05:56.289 }, 00:05:56.289 "claimed": true, 00:05:56.289 "claim_type": "exclusive_write", 00:05:56.289 "zoned": false, 00:05:56.289 "supported_io_types": { 00:05:56.289 "read": true, 00:05:56.289 "write": true, 00:05:56.289 "unmap": true, 00:05:56.289 "flush": true, 00:05:56.289 "reset": true, 00:05:56.289 "nvme_admin": false, 00:05:56.289 "nvme_io": false, 00:05:56.289 "nvme_io_md": false, 00:05:56.289 "write_zeroes": true, 00:05:56.289 "zcopy": true, 00:05:56.289 "get_zone_info": false, 00:05:56.289 "zone_management": false, 00:05:56.289 "zone_append": false, 00:05:56.289 "compare": false, 00:05:56.289 "compare_and_write": false, 00:05:56.289 "abort": true, 00:05:56.289 "seek_hole": false, 00:05:56.289 "seek_data": false, 00:05:56.289 "copy": true, 00:05:56.289 "nvme_iov_md": false 00:05:56.289 }, 00:05:56.289 "memory_domains": [ 00:05:56.289 { 00:05:56.289 "dma_device_id": "system", 00:05:56.289 "dma_device_type": 1 00:05:56.289 }, 00:05:56.289 { 00:05:56.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.289 "dma_device_type": 2 00:05:56.289 } 00:05:56.289 ], 00:05:56.289 "driver_specific": {} 00:05:56.289 }, 00:05:56.289 { 00:05:56.289 "name": "Passthru0", 00:05:56.289 "aliases": [ 00:05:56.289 "af84de90-bfcb-5065-94ad-f608935aab1f" 00:05:56.289 ], 00:05:56.289 "product_name": "passthru", 00:05:56.289 "block_size": 512, 00:05:56.289 "num_blocks": 16384, 00:05:56.289 "uuid": "af84de90-bfcb-5065-94ad-f608935aab1f", 00:05:56.289 "assigned_rate_limits": { 00:05:56.289 "rw_ios_per_sec": 0, 00:05:56.289 "rw_mbytes_per_sec": 0, 00:05:56.289 "r_mbytes_per_sec": 0, 00:05:56.289 "w_mbytes_per_sec": 0 00:05:56.289 }, 00:05:56.289 "claimed": false, 00:05:56.289 "zoned": false, 00:05:56.289 "supported_io_types": { 00:05:56.289 "read": true, 00:05:56.289 "write": true, 00:05:56.289 "unmap": true, 00:05:56.289 "flush": true, 00:05:56.289 "reset": true, 
00:05:56.289 "nvme_admin": false, 00:05:56.289 "nvme_io": false, 00:05:56.289 "nvme_io_md": false, 00:05:56.289 "write_zeroes": true, 00:05:56.289 "zcopy": true, 00:05:56.289 "get_zone_info": false, 00:05:56.289 "zone_management": false, 00:05:56.289 "zone_append": false, 00:05:56.289 "compare": false, 00:05:56.289 "compare_and_write": false, 00:05:56.289 "abort": true, 00:05:56.289 "seek_hole": false, 00:05:56.289 "seek_data": false, 00:05:56.289 "copy": true, 00:05:56.289 "nvme_iov_md": false 00:05:56.289 }, 00:05:56.289 "memory_domains": [ 00:05:56.289 { 00:05:56.289 "dma_device_id": "system", 00:05:56.289 "dma_device_type": 1 00:05:56.289 }, 00:05:56.289 { 00:05:56.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.289 "dma_device_type": 2 00:05:56.289 } 00:05:56.289 ], 00:05:56.289 "driver_specific": { 00:05:56.289 "passthru": { 00:05:56.289 "name": "Passthru0", 00:05:56.289 "base_bdev_name": "Malloc2" 00:05:56.289 } 00:05:56.289 } 00:05:56.289 } 00:05:56.289 ]' 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:56.289 00:05:56.289 real 0m0.299s 00:05:56.289 user 0m0.183s 00:05:56.289 sys 0m0.056s 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.289 22:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.289 ************************************ 00:05:56.289 END TEST rpc_daemon_integrity 00:05:56.289 ************************************ 00:05:56.549 22:13:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:56.549 22:13:15 rpc -- rpc/rpc.sh@84 -- # killprocess 3089301 00:05:56.549 22:13:15 rpc -- common/autotest_common.sh@952 -- # '[' -z 3089301 ']' 00:05:56.549 22:13:15 rpc -- common/autotest_common.sh@956 -- # kill -0 3089301 00:05:56.549 22:13:15 rpc -- common/autotest_common.sh@957 -- # uname 00:05:56.549 22:13:15 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:56.549 22:13:15 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3089301 
00:05:56.549 22:13:15 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:56.549 22:13:15 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:56.549 22:13:15 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3089301' 00:05:56.549 killing process with pid 3089301 00:05:56.549 22:13:15 rpc -- common/autotest_common.sh@971 -- # kill 3089301 00:05:56.549 22:13:15 rpc -- common/autotest_common.sh@976 -- # wait 3089301 00:05:56.809 00:05:56.809 real 0m2.189s 00:05:56.809 user 0m2.753s 00:05:56.809 sys 0m0.822s 00:05:56.809 22:13:16 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.809 22:13:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.809 ************************************ 00:05:56.809 END TEST rpc 00:05:56.809 ************************************ 00:05:56.809 22:13:16 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:56.809 22:13:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.809 22:13:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.809 22:13:16 -- common/autotest_common.sh@10 -- # set +x 00:05:56.809 ************************************ 00:05:56.809 START TEST skip_rpc 00:05:56.809 ************************************ 00:05:56.809 22:13:16 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:57.068 * Looking for test storage... 00:05:57.068 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:57.068 22:13:16 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:57.068 22:13:16 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:57.068 22:13:16 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:57.068 22:13:16 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.068 22:13:16 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:57.068 22:13:16 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.068 22:13:16 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:57.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.069 --rc genhtml_branch_coverage=1 00:05:57.069 --rc genhtml_function_coverage=1 00:05:57.069 --rc genhtml_legend=1 00:05:57.069 --rc geninfo_all_blocks=1 00:05:57.069 --rc geninfo_unexecuted_blocks=1 00:05:57.069 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.069 ' 00:05:57.069 22:13:16 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:57.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.069 --rc genhtml_branch_coverage=1 00:05:57.069 --rc genhtml_function_coverage=1 00:05:57.069 --rc genhtml_legend=1 00:05:57.069 --rc geninfo_all_blocks=1 00:05:57.069 --rc geninfo_unexecuted_blocks=1 00:05:57.069 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.069 ' 00:05:57.069 22:13:16 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:57.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.069 --rc genhtml_branch_coverage=1 00:05:57.069 --rc genhtml_function_coverage=1 00:05:57.069 --rc genhtml_legend=1 00:05:57.069 --rc geninfo_all_blocks=1 00:05:57.069 --rc geninfo_unexecuted_blocks=1 00:05:57.069 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.069 ' 00:05:57.069 22:13:16 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:57.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.069 --rc genhtml_branch_coverage=1 00:05:57.069 --rc genhtml_function_coverage=1 00:05:57.069 --rc genhtml_legend=1 00:05:57.069 --rc geninfo_all_blocks=1 00:05:57.069 --rc geninfo_unexecuted_blocks=1 00:05:57.069 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.069 ' 00:05:57.069 22:13:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:57.069 22:13:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:57.069 22:13:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:57.069 22:13:16 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:57.069 22:13:16 
skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.069 22:13:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.069 ************************************ 00:05:57.069 START TEST skip_rpc 00:05:57.069 ************************************ 00:05:57.069 22:13:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:57.069 22:13:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3089775 00:05:57.069 22:13:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:57.069 22:13:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.069 22:13:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:57.069 [2024-10-29 22:13:16.528578] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:05:57.069 [2024-10-29 22:13:16.528635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089775 ] 00:05:57.328 [2024-10-29 22:13:16.615896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.328 [2024-10-29 22:13:16.660759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3089775 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 3089775 ']' 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 3089775 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3089775 
00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3089775' 00:06:02.611 killing process with pid 3089775 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 3089775 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 3089775 00:06:02.611 00:06:02.611 real 0m5.367s 00:06:02.611 user 0m5.106s 00:06:02.611 sys 0m0.297s 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:02.611 22:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.611 ************************************ 00:06:02.611 END TEST skip_rpc 00:06:02.611 ************************************ 00:06:02.611 22:13:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:02.611 22:13:21 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:02.611 22:13:21 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:02.611 22:13:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.611 ************************************ 00:06:02.611 START TEST skip_rpc_with_json 00:06:02.611 ************************************ 00:06:02.611 22:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:06:02.611 22:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:02.611 22:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3090548 00:06:02.611 22:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.611 22:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.611 22:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3090548 00:06:02.611 22:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 3090548 ']' 00:06:02.611 22:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.611 22:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:02.611 22:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.611 22:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:02.611 22:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.611 [2024-10-29 22:13:21.976540] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:06:02.611 [2024-10-29 22:13:21.976602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090548 ] 00:06:02.611 [2024-10-29 22:13:22.062627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.611 [2024-10-29 22:13:22.106769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.870 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:02.870 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:06:02.870 22:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:02.870 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.870 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.870 [2024-10-29 22:13:22.340357] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:02.870 request: 00:06:02.870 { 00:06:02.870 "trtype": "tcp", 00:06:02.870 "method": "nvmf_get_transports", 00:06:02.870 "req_id": 1 00:06:02.870 } 00:06:02.870 Got JSON-RPC error response 00:06:02.870 response: 00:06:02.871 { 00:06:02.871 "code": -19, 00:06:02.871 "message": "No such device" 00:06:02.871 } 00:06:02.871 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:02.871 22:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:02.871 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.871 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.871 [2024-10-29 22:13:22.352451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.871 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.871 22:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:02.871 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.871 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.130 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.130 22:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:03.130 { 00:06:03.130 "subsystems": [ 00:06:03.130 { 00:06:03.130 "subsystem": "scheduler", 00:06:03.130 "config": [ 00:06:03.130 { 00:06:03.130 "method": "framework_set_scheduler", 00:06:03.130 "params": { 00:06:03.130 "name": "static" 00:06:03.130 } 00:06:03.130 } 00:06:03.130 ] 00:06:03.130 }, 00:06:03.130 { 00:06:03.130 "subsystem": "vmd", 00:06:03.130 "config": [] 00:06:03.130 }, 00:06:03.130 { 00:06:03.130 "subsystem": "sock", 00:06:03.130 "config": [ 00:06:03.130 { 00:06:03.130 "method": "sock_set_default_impl", 00:06:03.130 "params": { 00:06:03.130 "impl_name": "posix" 00:06:03.130 } 00:06:03.130 }, 00:06:03.130 { 00:06:03.130 "method": "sock_impl_set_options", 00:06:03.130 "params": { 00:06:03.130 "impl_name": "ssl", 00:06:03.130 "recv_buf_size": 4096, 00:06:03.130 "send_buf_size": 4096, 00:06:03.130 "enable_recv_pipe": true, 00:06:03.130 "enable_quickack": false, 00:06:03.130 
"enable_placement_id": 0, 00:06:03.130 "enable_zerocopy_send_server": true, 00:06:03.130 "enable_zerocopy_send_client": false, 00:06:03.130 "zerocopy_threshold": 0, 00:06:03.130 "tls_version": 0, 00:06:03.130 "enable_ktls": false 00:06:03.130 } 00:06:03.130 }, 00:06:03.130 { 00:06:03.130 "method": "sock_impl_set_options", 00:06:03.130 "params": { 00:06:03.130 "impl_name": "posix", 00:06:03.130 "recv_buf_size": 2097152, 00:06:03.130 "send_buf_size": 2097152, 00:06:03.130 "enable_recv_pipe": true, 00:06:03.130 "enable_quickack": false, 00:06:03.130 "enable_placement_id": 0, 00:06:03.130 "enable_zerocopy_send_server": true, 00:06:03.130 "enable_zerocopy_send_client": false, 00:06:03.130 "zerocopy_threshold": 0, 00:06:03.130 "tls_version": 0, 00:06:03.130 "enable_ktls": false 00:06:03.130 } 00:06:03.130 } 00:06:03.130 ] 00:06:03.130 }, 00:06:03.130 { 00:06:03.130 "subsystem": "iobuf", 00:06:03.130 "config": [ 00:06:03.130 { 00:06:03.130 "method": "iobuf_set_options", 00:06:03.130 "params": { 00:06:03.130 "small_pool_count": 8192, 00:06:03.130 "large_pool_count": 1024, 00:06:03.130 "small_bufsize": 8192, 00:06:03.130 "large_bufsize": 135168, 00:06:03.130 "enable_numa": false 00:06:03.130 } 00:06:03.130 } 00:06:03.130 ] 00:06:03.130 }, 00:06:03.130 { 00:06:03.130 "subsystem": "keyring", 00:06:03.130 "config": [] 00:06:03.130 }, 00:06:03.130 { 00:06:03.130 "subsystem": "vfio_user_target", 00:06:03.130 "config": null 00:06:03.130 }, 00:06:03.130 { 00:06:03.130 "subsystem": "fsdev", 00:06:03.130 "config": [ 00:06:03.130 { 00:06:03.130 "method": "fsdev_set_opts", 00:06:03.130 "params": { 00:06:03.130 "fsdev_io_pool_size": 65535, 00:06:03.130 "fsdev_io_cache_size": 256 00:06:03.130 } 00:06:03.130 } 00:06:03.130 ] 00:06:03.130 }, 00:06:03.130 { 00:06:03.130 "subsystem": "accel", 00:06:03.130 "config": [ 00:06:03.130 { 00:06:03.130 "method": "accel_set_options", 00:06:03.130 "params": { 00:06:03.130 "small_cache_size": 128, 00:06:03.130 "large_cache_size": 16, 00:06:03.130 "task_count": 2048, 00:06:03.130 "sequence_count": 2048, 00:06:03.130 "buf_count": 2048 00:06:03.130 } 00:06:03.130 } 00:06:03.130 ] 00:06:03.130 }, 00:06:03.130 { 00:06:03.130 "subsystem": "bdev", 00:06:03.130 "config": [ 00:06:03.130 { 00:06:03.130 "method": "bdev_set_options", 00:06:03.130 "params": { 00:06:03.130 "bdev_io_pool_size": 65535, 00:06:03.130 "bdev_io_cache_size": 256, 00:06:03.130 "bdev_auto_examine": true, 00:06:03.130 "iobuf_small_cache_size": 128, 00:06:03.130 "iobuf_large_cache_size": 16 00:06:03.130 } 00:06:03.130 }, 00:06:03.130 { 00:06:03.130 "method": "bdev_raid_set_options", 00:06:03.130 "params": { 00:06:03.130 "process_window_size_kb": 1024, 00:06:03.130 "process_max_bandwidth_mb_sec": 0 00:06:03.130 } 00:06:03.130 }, 00:06:03.130 { 00:06:03.130 "method": "bdev_nvme_set_options", 00:06:03.130 "params": { 00:06:03.130 "action_on_timeout": "none", 00:06:03.130 "timeout_us": 0, 00:06:03.130 "timeout_admin_us": 0, 00:06:03.130 "keep_alive_timeout_ms": 10000, 00:06:03.130 "arbitration_burst": 0, 00:06:03.130 "low_priority_weight": 0, 00:06:03.130 "medium_priority_weight": 0, 00:06:03.130 "high_priority_weight": 0, 00:06:03.130 "nvme_adminq_poll_period_us": 10000, 00:06:03.130 "nvme_ioq_poll_period_us": 0, 00:06:03.130 "io_queue_requests": 0, 00:06:03.130 "delay_cmd_submit": true, 00:06:03.130 "transport_retry_count": 4, 00:06:03.130 "bdev_retry_count": 3, 00:06:03.130 "transport_ack_timeout": 0, 00:06:03.130 "ctrlr_loss_timeout_sec": 0, 00:06:03.130 "reconnect_delay_sec": 0, 00:06:03.130 
"fast_io_fail_timeout_sec": 0, 00:06:03.130 "disable_auto_failback": false, 00:06:03.130 "generate_uuids": false, 00:06:03.130 "transport_tos": 0, 00:06:03.130 "nvme_error_stat": false, 00:06:03.130 "rdma_srq_size": 0, 00:06:03.130 "io_path_stat": false, 00:06:03.130 "allow_accel_sequence": false, 00:06:03.130 "rdma_max_cq_size": 0, 00:06:03.130 "rdma_cm_event_timeout_ms": 0, 00:06:03.130 "dhchap_digests": [ 00:06:03.130 "sha256", 00:06:03.130 "sha384", 00:06:03.130 "sha512" 00:06:03.131 ], 00:06:03.131 "dhchap_dhgroups": [ 00:06:03.131 "null", 00:06:03.131 "ffdhe2048", 00:06:03.131 "ffdhe3072", 00:06:03.131 "ffdhe4096", 00:06:03.131 "ffdhe6144", 00:06:03.131 "ffdhe8192" 00:06:03.131 ] 00:06:03.131 } 00:06:03.131 }, 00:06:03.131 { 00:06:03.131 "method": "bdev_nvme_set_hotplug", 00:06:03.131 "params": { 00:06:03.131 "period_us": 100000, 00:06:03.131 "enable": false 00:06:03.131 } 00:06:03.131 }, 00:06:03.131 { 00:06:03.131 "method": "bdev_iscsi_set_options", 00:06:03.131 "params": { 00:06:03.131 "timeout_sec": 30 00:06:03.131 } 00:06:03.131 }, 00:06:03.131 { 00:06:03.131 "method": "bdev_wait_for_examine" 00:06:03.131 } 00:06:03.131 ] 00:06:03.131 }, 00:06:03.131 { 00:06:03.131 "subsystem": "nvmf", 00:06:03.131 "config": [ 00:06:03.131 { 00:06:03.131 "method": "nvmf_set_config", 00:06:03.131 "params": { 00:06:03.131 "discovery_filter": "match_any", 00:06:03.131 "admin_cmd_passthru": { 00:06:03.131 "identify_ctrlr": false 00:06:03.131 }, 00:06:03.131 "dhchap_digests": [ 00:06:03.131 "sha256", 00:06:03.131 "sha384", 00:06:03.131 "sha512" 00:06:03.131 ], 00:06:03.131 "dhchap_dhgroups": [ 00:06:03.131 "null", 00:06:03.131 "ffdhe2048", 00:06:03.131 "ffdhe3072", 00:06:03.131 "ffdhe4096", 00:06:03.131 "ffdhe6144", 00:06:03.131 "ffdhe8192" 00:06:03.131 ] 00:06:03.131 } 00:06:03.131 }, 00:06:03.131 { 00:06:03.131 "method": "nvmf_set_max_subsystems", 00:06:03.131 "params": { 00:06:03.131 "max_subsystems": 1024 00:06:03.131 } 00:06:03.131 }, 00:06:03.131 { 00:06:03.131 "method": "nvmf_set_crdt", 00:06:03.131 "params": { 00:06:03.131 "crdt1": 0, 00:06:03.131 "crdt2": 0, 00:06:03.131 "crdt3": 0 00:06:03.131 } 00:06:03.131 }, 00:06:03.131 { 00:06:03.131 "method": "nvmf_create_transport", 00:06:03.131 "params": { 00:06:03.131 "trtype": "TCP", 00:06:03.131 "max_queue_depth": 128, 00:06:03.131 "max_io_qpairs_per_ctrlr": 127, 00:06:03.131 "in_capsule_data_size": 4096, 00:06:03.131 "max_io_size": 131072, 00:06:03.131 "io_unit_size": 131072, 00:06:03.131 "max_aq_depth": 128, 00:06:03.131 "num_shared_buffers": 511, 00:06:03.131 "buf_cache_size": 4294967295, 00:06:03.131 "dif_insert_or_strip": false, 00:06:03.131 "zcopy": false, 00:06:03.131 "c2h_success": true, 00:06:03.131 "sock_priority": 0, 00:06:03.131 "abort_timeout_sec": 1, 00:06:03.131 "ack_timeout": 0, 00:06:03.131 "data_wr_pool_size": 0 00:06:03.131 } 00:06:03.131 } 00:06:03.131 ] 00:06:03.131 }, 00:06:03.131 { 00:06:03.131 "subsystem": "nbd", 00:06:03.131 "config": [] 00:06:03.131 }, 00:06:03.131 { 00:06:03.131 "subsystem": "ublk", 00:06:03.131 "config": [] 00:06:03.131 }, 00:06:03.131 { 00:06:03.131 "subsystem": "vhost_blk", 00:06:03.131 "config": [] 00:06:03.131 }, 00:06:03.131 { 00:06:03.131 "subsystem": "scsi", 00:06:03.131 "config": null 00:06:03.131 }, 00:06:03.131 { 00:06:03.131 "subsystem": "iscsi", 00:06:03.131 "config": [ 00:06:03.131 { 00:06:03.131 "method": "iscsi_set_options", 00:06:03.131 "params": { 00:06:03.131 "node_base": "iqn.2016-06.io.spdk", 00:06:03.131 "max_sessions": 128, 00:06:03.131 "max_connections_per_session": 2, 
00:06:03.131 "max_queue_depth": 64, 00:06:03.131 "default_time2wait": 2, 00:06:03.131 "default_time2retain": 20, 00:06:03.131 "first_burst_length": 8192, 00:06:03.131 "immediate_data": true, 00:06:03.131 "allow_duplicated_isid": false, 00:06:03.131 "error_recovery_level": 0, 00:06:03.131 "nop_timeout": 60, 00:06:03.131 "nop_in_interval": 30, 00:06:03.131 "disable_chap": false, 00:06:03.131 "require_chap": false, 00:06:03.131 "mutual_chap": false, 00:06:03.131 "chap_group": 0, 00:06:03.131 "max_large_datain_per_connection": 64, 00:06:03.131 "max_r2t_per_connection": 4, 00:06:03.131 "pdu_pool_size": 36864, 00:06:03.131 "immediate_data_pool_size": 16384, 00:06:03.131 "data_out_pool_size": 2048 00:06:03.131 } 00:06:03.131 } 00:06:03.131 ] 00:06:03.131 }, 00:06:03.131 { 00:06:03.131 "subsystem": "vhost_scsi", 00:06:03.131 "config": [] 00:06:03.131 } 00:06:03.131 ] 00:06:03.131 } 00:06:03.131 22:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:03.131 22:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3090548 00:06:03.131 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3090548 ']' 00:06:03.131 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3090548 00:06:03.131 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:03.131 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:03.131 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3090548 00:06:03.131 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:03.131 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:03.131 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3090548' 00:06:03.131 killing process with pid 3090548 00:06:03.131 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3090548 00:06:03.131 22:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3090548 00:06:03.390 22:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3090583 00:06:03.391 22:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:03.391 22:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:08.661 22:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3090583 00:06:08.661 22:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3090583 ']' 00:06:08.661 22:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3090583 00:06:08.661 22:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:08.661 22:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:08.661 22:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3090583 00:06:08.661 22:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:08.661 22:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:08.661 22:13:27 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3090583' 00:06:08.661 killing process with pid 3090583 00:06:08.661 22:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3090583 00:06:08.661 22:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3090583 00:06:08.919 22:13:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:08.919 22:13:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:08.919 00:06:08.919 real 0m6.308s 00:06:08.919 user 0m5.970s 00:06:08.919 sys 0m0.660s 00:06:08.919 22:13:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.919 22:13:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.919 ************************************ 00:06:08.919 END TEST skip_rpc_with_json 00:06:08.919 ************************************ 00:06:08.919 22:13:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:08.919 22:13:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.919 22:13:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.919 22:13:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.919 ************************************ 00:06:08.919 START TEST skip_rpc_with_delay 00:06:08.919 ************************************ 00:06:08.919 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:06:08.919 22:13:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.919 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:08.919 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.919 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:08.919 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.919 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:08.919 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.919 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:08.920 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.920 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:08.920 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:08.920 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.920 [2024-10-29 22:13:28.374703] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:08.920 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:08.920 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.920 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.920 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.920 00:06:08.920 real 0m0.046s 00:06:08.920 user 0m0.022s 00:06:08.920 sys 0m0.024s 00:06:08.920 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.920 22:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:08.920 ************************************ 00:06:08.920 END TEST skip_rpc_with_delay 00:06:08.920 ************************************ 00:06:08.920 22:13:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:08.920 22:13:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:08.920 22:13:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:08.920 22:13:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.920 22:13:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.920 22:13:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.177 ************************************ 00:06:09.178 START TEST exit_on_failed_rpc_init 00:06:09.178 ************************************ 00:06:09.178 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:06:09.178 22:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3091394 00:06:09.178 22:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3091394 00:06:09.178 22:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.178 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 3091394 ']' 00:06:09.178 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.178 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.178 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.178 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.178 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:09.178 [2024-10-29 22:13:28.508538] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
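[editor's note] A minimal sketch (not part of the captured output) of reproducing the '--wait-for-rpc' rejection recorded just above, assuming the same spdk_tgt build path used throughout this run:

    # Sketch only: spdk_tgt refuses --wait-for-rpc when --no-rpc-server is also given,
    # which is exactly what test_skip_rpc_with_delay asserts above.
    spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
    if ! "$spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "rejected as expected: --wait-for-rpc needs an RPC server"
    fi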
00:06:09.178 [2024-10-29 22:13:28.508603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091394 ] 00:06:09.178 [2024-10-29 22:13:28.595529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.178 [2024-10-29 22:13:28.644400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:09.436 22:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.436 [2024-10-29 22:13:28.898159] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:06:09.436 [2024-10-29 22:13:28.898246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091511 ] 00:06:09.694 [2024-10-29 22:13:28.983352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.694 [2024-10-29 22:13:29.028012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.694 [2024-10-29 22:13:29.028088] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:09.694 [2024-10-29 22:13:29.028101] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:09.694 [2024-10-29 22:13:29.028109] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3091394 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 3091394 ']' 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 3091394 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3091394 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:09.694 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:09.695 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3091394' 00:06:09.695 killing process with pid 3091394 00:06:09.695 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 3091394 00:06:09.695 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 3091394 00:06:09.954 00:06:09.954 real 0m0.934s 00:06:09.954 user 0m0.950s 00:06:09.954 sys 0m0.431s 00:06:09.954 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.954 22:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:09.954 ************************************ 00:06:09.954 END TEST exit_on_failed_rpc_init 00:06:09.954 ************************************ 00:06:09.954 22:13:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:09.954 00:06:09.954 real 0m13.216s 00:06:09.954 user 0m12.295s 00:06:09.954 sys 0m1.769s 00:06:09.954 22:13:29 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.954 22:13:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.954 ************************************ 00:06:09.954 END TEST skip_rpc 00:06:09.954 ************************************ 00:06:10.214 22:13:29 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:10.214 22:13:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:10.214 22:13:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.214 22:13:29 
-- common/autotest_common.sh@10 -- # set +x 00:06:10.214 ************************************ 00:06:10.214 START TEST rpc_client 00:06:10.214 ************************************ 00:06:10.214 22:13:29 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:10.214 * Looking for test storage... 00:06:10.214 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:06:10.214 22:13:29 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:10.214 22:13:29 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:06:10.214 22:13:29 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:10.214 22:13:29 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:10.214 22:13:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.214 22:13:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.214 22:13:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.214 22:13:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.214 22:13:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.214 22:13:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.214 22:13:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.214 22:13:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.214 22:13:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.214 22:13:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.215 22:13:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.215 22:13:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:10.215 22:13:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:10.215 22:13:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.215 22:13:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.215 22:13:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:10.215 22:13:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:10.215 22:13:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.215 22:13:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:10.215 22:13:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.215 22:13:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:10.215 22:13:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:10.474 22:13:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.474 22:13:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:10.474 22:13:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.474 22:13:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.474 22:13:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.474 22:13:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:10.474 22:13:29 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.474 22:13:29 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:10.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.474 --rc genhtml_branch_coverage=1 00:06:10.474 --rc genhtml_function_coverage=1 00:06:10.474 --rc genhtml_legend=1 00:06:10.474 --rc geninfo_all_blocks=1 00:06:10.474 --rc geninfo_unexecuted_blocks=1 00:06:10.474 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:10.474 ' 00:06:10.474 22:13:29 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:10.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.474 --rc genhtml_branch_coverage=1 00:06:10.474 --rc genhtml_function_coverage=1 00:06:10.474 --rc genhtml_legend=1 00:06:10.474 --rc geninfo_all_blocks=1 00:06:10.474 --rc geninfo_unexecuted_blocks=1 00:06:10.474 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:10.474 ' 00:06:10.474 22:13:29 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:10.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.474 --rc genhtml_branch_coverage=1 00:06:10.474 --rc genhtml_function_coverage=1 00:06:10.474 --rc genhtml_legend=1 00:06:10.474 --rc geninfo_all_blocks=1 00:06:10.474 --rc geninfo_unexecuted_blocks=1 00:06:10.474 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:10.474 ' 00:06:10.474 22:13:29 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:10.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.474 --rc genhtml_branch_coverage=1 00:06:10.474 --rc genhtml_function_coverage=1 00:06:10.474 --rc genhtml_legend=1 00:06:10.474 --rc geninfo_all_blocks=1 00:06:10.474 --rc geninfo_unexecuted_blocks=1 00:06:10.474 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:10.474 ' 00:06:10.474 22:13:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:10.474 OK 00:06:10.474 22:13:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:10.474 00:06:10.474 real 0m0.220s 00:06:10.474 user 0m0.120s 00:06:10.474 sys 0m0.118s 00:06:10.474 22:13:29 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 
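[editor's note] The trace above shows scripts/common.sh deciding that lcov 1.15 is older than 2 by splitting both version strings on '.', '-' and ':' and comparing field by field. A rough, self-contained re-implementation of that idea (illustrative only, not the SPDK helper itself):

    # Illustrative version comparison in the spirit of the cmp_versions trace above.
    version_lt() {
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"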
00:06:10.474 22:13:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:10.474 ************************************ 00:06:10.474 END TEST rpc_client 00:06:10.474 ************************************ 00:06:10.474 22:13:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:10.474 22:13:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:10.474 22:13:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.474 22:13:29 -- common/autotest_common.sh@10 -- # set +x 00:06:10.474 ************************************ 00:06:10.474 START TEST json_config 00:06:10.474 ************************************ 00:06:10.474 22:13:29 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:10.474 22:13:29 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:10.474 22:13:29 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:06:10.474 22:13:29 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:10.474 22:13:29 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:10.474 22:13:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.474 22:13:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.474 22:13:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.474 22:13:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.474 22:13:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.474 22:13:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.474 22:13:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.474 22:13:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.474 22:13:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.474 22:13:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.474 22:13:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.474 22:13:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:10.474 22:13:29 json_config -- scripts/common.sh@345 -- # : 1 00:06:10.474 22:13:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.474 22:13:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.474 22:13:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:10.474 22:13:29 json_config -- scripts/common.sh@353 -- # local d=1 00:06:10.474 22:13:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.474 22:13:29 json_config -- scripts/common.sh@355 -- # echo 1 00:06:10.474 22:13:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.733 22:13:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:10.733 22:13:29 json_config -- scripts/common.sh@353 -- # local d=2 00:06:10.733 22:13:30 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.733 22:13:30 json_config -- scripts/common.sh@355 -- # echo 2 00:06:10.733 22:13:30 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.733 22:13:30 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.733 22:13:30 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.733 22:13:30 json_config -- scripts/common.sh@368 -- # return 0 00:06:10.733 22:13:30 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.733 22:13:30 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:10.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.733 --rc genhtml_branch_coverage=1 00:06:10.733 --rc genhtml_function_coverage=1 00:06:10.733 --rc genhtml_legend=1 00:06:10.733 --rc geninfo_all_blocks=1 00:06:10.733 --rc geninfo_unexecuted_blocks=1 00:06:10.733 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:10.733 ' 00:06:10.733 22:13:30 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:10.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.733 --rc genhtml_branch_coverage=1 00:06:10.733 --rc genhtml_function_coverage=1 00:06:10.733 --rc genhtml_legend=1 00:06:10.733 --rc geninfo_all_blocks=1 00:06:10.733 --rc geninfo_unexecuted_blocks=1 00:06:10.733 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:10.733 ' 00:06:10.733 22:13:30 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:10.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.733 --rc genhtml_branch_coverage=1 00:06:10.733 --rc genhtml_function_coverage=1 00:06:10.733 --rc genhtml_legend=1 00:06:10.733 --rc geninfo_all_blocks=1 00:06:10.733 --rc geninfo_unexecuted_blocks=1 00:06:10.733 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:10.733 ' 00:06:10.733 22:13:30 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:10.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.733 --rc genhtml_branch_coverage=1 00:06:10.733 --rc genhtml_function_coverage=1 00:06:10.733 --rc genhtml_legend=1 00:06:10.733 --rc geninfo_all_blocks=1 00:06:10.733 --rc geninfo_unexecuted_blocks=1 00:06:10.733 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:10.733 ' 00:06:10.734 22:13:30 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:10.734 22:13:30 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.734 22:13:30 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.734 22:13:30 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.734 22:13:30 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.734 22:13:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.734 22:13:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.734 22:13:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.734 22:13:30 json_config -- paths/export.sh@5 -- # export PATH 00:06:10.734 22:13:30 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@51 -- # : 0 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.734 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.734 22:13:30 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.734 22:13:30 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:10.734 22:13:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:10.734 22:13:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:10.734 22:13:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:10.734 22:13:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:10.734 22:13:30 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:10.734 WARNING: No tests are enabled so not running JSON configuration tests 00:06:10.734 22:13:30 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:10.734 00:06:10.734 real 0m0.195s 00:06:10.734 user 0m0.111s 00:06:10.734 sys 0m0.092s 00:06:10.734 22:13:30 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.734 22:13:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.734 ************************************ 00:06:10.734 END TEST json_config 00:06:10.734 ************************************ 00:06:10.734 22:13:30 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.734 22:13:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:10.734 22:13:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.734 22:13:30 -- common/autotest_common.sh@10 -- # set +x 00:06:10.734 ************************************ 00:06:10.734 START TEST json_config_extra_key 00:06:10.734 ************************************ 00:06:10.734 22:13:30 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.734 22:13:30 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:10.734 22:13:30 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov 
--version 00:06:10.734 22:13:30 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:10.994 22:13:30 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.994 22:13:30 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:10.994 22:13:30 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.994 22:13:30 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:10.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.994 --rc genhtml_branch_coverage=1 00:06:10.994 --rc genhtml_function_coverage=1 00:06:10.994 --rc genhtml_legend=1 00:06:10.994 --rc geninfo_all_blocks=1 00:06:10.994 --rc geninfo_unexecuted_blocks=1 00:06:10.994 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:10.994 ' 00:06:10.994 22:13:30 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:10.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.994 --rc genhtml_branch_coverage=1 
00:06:10.994 --rc genhtml_function_coverage=1 00:06:10.994 --rc genhtml_legend=1 00:06:10.994 --rc geninfo_all_blocks=1 00:06:10.994 --rc geninfo_unexecuted_blocks=1 00:06:10.994 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:10.994 ' 00:06:10.994 22:13:30 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:10.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.994 --rc genhtml_branch_coverage=1 00:06:10.994 --rc genhtml_function_coverage=1 00:06:10.994 --rc genhtml_legend=1 00:06:10.994 --rc geninfo_all_blocks=1 00:06:10.994 --rc geninfo_unexecuted_blocks=1 00:06:10.994 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:10.994 ' 00:06:10.994 22:13:30 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:10.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.994 --rc genhtml_branch_coverage=1 00:06:10.994 --rc genhtml_function_coverage=1 00:06:10.994 --rc genhtml_legend=1 00:06:10.994 --rc geninfo_all_blocks=1 00:06:10.994 --rc geninfo_unexecuted_blocks=1 00:06:10.994 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:10.994 ' 00:06:10.994 22:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.994 22:13:30 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:10.994 22:13:30 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.994 22:13:30 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.994 22:13:30 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.994 22:13:30 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.994 22:13:30 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.994 22:13:30 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.994 22:13:30 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:10.995 22:13:30 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.995 22:13:30 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.995 22:13:30 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.995 22:13:30 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.995 22:13:30 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.995 22:13:30 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.995 22:13:30 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.995 22:13:30 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:10.995 22:13:30 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.995 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.995 22:13:30 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.995 22:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:10.995 22:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:10.995 22:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:06:10.995 22:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:10.995 22:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:10.995 22:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:10.995 22:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:10.995 22:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:10.995 22:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:10.995 22:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.995 22:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:10.995 INFO: launching applications... 00:06:10.995 22:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.995 22:13:30 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:10.995 22:13:30 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:10.995 22:13:30 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.995 22:13:30 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.995 22:13:30 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.995 22:13:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.995 22:13:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.995 22:13:30 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3091855 00:06:10.995 22:13:30 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.995 Waiting for target to run... 00:06:10.995 22:13:30 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3091855 /var/tmp/spdk_tgt.sock 00:06:10.995 22:13:30 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 3091855 ']' 00:06:10.995 22:13:30 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.995 22:13:30 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.995 22:13:30 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:10.995 22:13:30 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
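[editor's note] A condensed sketch of the launch the json_config_extra_key test performs here; every flag and path below is taken from the log, and the socket poll is only a crude stand-in for the real waitforlisten helper:

    # Sketch: start the extra_key target on its own RPC socket and wait for the socket.
    spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
    cfg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json
    "$spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$cfg" &
    pid=$!
    until [[ -S /var/tmp/spdk_tgt.sock ]]; do sleep 0.1; done   # stand-in for waitforlisten
    echo "target $pid is listening on /var/tmp/spdk_tgt.sock"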
00:06:10.995 22:13:30 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:10.995 22:13:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.995 [2024-10-29 22:13:30.357237] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:06:10.995 [2024-10-29 22:13:30.357340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091855 ] 00:06:11.564 [2024-10-29 22:13:30.833020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.564 [2024-10-29 22:13:30.881157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.824 22:13:31 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:11.824 22:13:31 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:06:11.824 22:13:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:11.824 00:06:11.824 22:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:11.824 INFO: shutting down applications... 00:06:11.824 22:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:11.824 22:13:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:11.824 22:13:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.824 22:13:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3091855 ]] 00:06:11.824 22:13:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3091855 00:06:11.824 22:13:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.824 22:13:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.824 22:13:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3091855 00:06:11.824 22:13:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.392 22:13:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.392 22:13:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.392 22:13:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3091855 00:06:12.392 22:13:31 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:12.392 22:13:31 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:12.392 22:13:31 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:12.392 22:13:31 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:12.392 SPDK target shutdown done 00:06:12.392 22:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:12.392 Success 00:06:12.392 00:06:12.393 real 0m1.624s 00:06:12.393 user 0m1.195s 00:06:12.393 sys 0m0.630s 00:06:12.393 22:13:31 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:12.393 22:13:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:12.393 ************************************ 00:06:12.393 END TEST json_config_extra_key 00:06:12.393 ************************************ 00:06:12.393 22:13:31 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
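[editor's note] A sketch of the shutdown sequence visible above (json_config_test_shutdown_app in json_config/common.sh): send SIGINT, then poll the pid with kill -0 every half second for up to 30 tries. The function name below is illustrative; only the signal, loop bound, and sleep interval come from the log:

    # Sketch of the graceful-shutdown polling loop exercised above.
    shutdown_target() {
        local pid=$1
        kill -SIGINT "$pid"
        for (( i = 0; i < 30; i++ )); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo "SPDK target shutdown done"
                return 0
            fi
            sleep 0.5
        done
        echo "target $pid did not exit after SIGINT" >&2
        return 1
    }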
00:06:12.393 22:13:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:12.393 22:13:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:12.393 22:13:31 -- common/autotest_common.sh@10 -- # set +x 00:06:12.393 ************************************ 00:06:12.393 START TEST alias_rpc 00:06:12.393 ************************************ 00:06:12.393 22:13:31 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.653 * Looking for test storage... 00:06:12.653 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:06:12.653 22:13:31 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:12.653 22:13:31 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:12.653 22:13:31 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:12.653 22:13:32 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.653 22:13:32 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:12.653 22:13:32 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.653 22:13:32 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:12.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.653 --rc genhtml_branch_coverage=1 00:06:12.653 --rc genhtml_function_coverage=1 00:06:12.653 --rc genhtml_legend=1 00:06:12.653 --rc geninfo_all_blocks=1 00:06:12.653 --rc geninfo_unexecuted_blocks=1 00:06:12.653 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:12.653 ' 00:06:12.653 22:13:32 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:12.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.653 --rc genhtml_branch_coverage=1 00:06:12.653 --rc genhtml_function_coverage=1 00:06:12.653 --rc genhtml_legend=1 00:06:12.653 --rc geninfo_all_blocks=1 00:06:12.653 --rc geninfo_unexecuted_blocks=1 00:06:12.653 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:12.653 ' 00:06:12.653 22:13:32 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:12.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.653 --rc genhtml_branch_coverage=1 00:06:12.653 --rc genhtml_function_coverage=1 00:06:12.653 --rc genhtml_legend=1 00:06:12.653 --rc geninfo_all_blocks=1 00:06:12.653 --rc geninfo_unexecuted_blocks=1 00:06:12.653 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:12.653 ' 00:06:12.653 22:13:32 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:12.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.653 --rc genhtml_branch_coverage=1 00:06:12.653 --rc genhtml_function_coverage=1 00:06:12.653 --rc genhtml_legend=1 00:06:12.653 --rc geninfo_all_blocks=1 00:06:12.653 --rc geninfo_unexecuted_blocks=1 00:06:12.653 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:12.653 ' 00:06:12.653 22:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.653 22:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.653 22:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3092096 00:06:12.654 22:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3092096 00:06:12.654 22:13:32 alias_rpc -- 
common/autotest_common.sh@833 -- # '[' -z 3092096 ']' 00:06:12.654 22:13:32 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.654 22:13:32 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:12.654 22:13:32 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.654 22:13:32 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:12.654 22:13:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.654 [2024-10-29 22:13:32.035707] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:06:12.654 [2024-10-29 22:13:32.035776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092096 ] 00:06:12.654 [2024-10-29 22:13:32.101526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.654 [2024-10-29 22:13:32.145315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.913 22:13:32 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:12.913 22:13:32 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:12.913 22:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:13.173 22:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3092096 00:06:13.173 22:13:32 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 3092096 ']' 00:06:13.173 22:13:32 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 3092096 00:06:13.173 22:13:32 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:13.173 22:13:32 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:13.173 22:13:32 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3092096 00:06:13.173 22:13:32 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:13.173 22:13:32 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:13.173 22:13:32 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3092096' 00:06:13.173 killing process with pid 3092096 00:06:13.173 22:13:32 alias_rpc -- common/autotest_common.sh@971 -- # kill 3092096 00:06:13.173 22:13:32 alias_rpc -- common/autotest_common.sh@976 -- # wait 3092096 00:06:13.433 00:06:13.433 real 0m1.128s 00:06:13.433 user 0m1.152s 00:06:13.433 sys 0m0.439s 00:06:13.433 22:13:32 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:13.433 22:13:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.433 ************************************ 00:06:13.433 END TEST alias_rpc 00:06:13.433 ************************************ 00:06:13.694 22:13:32 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:13.694 22:13:32 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:13.694 22:13:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:13.694 22:13:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:13.694 22:13:32 -- common/autotest_common.sh@10 -- # set +x 00:06:13.694 ************************************ 00:06:13.694 START TEST 
spdkcli_tcp 00:06:13.694 ************************************ 00:06:13.694 22:13:33 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:13.694 * Looking for test storage... 00:06:13.694 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:06:13.694 22:13:33 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:13.694 22:13:33 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:13.694 22:13:33 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:13.694 22:13:33 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.694 22:13:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.954 22:13:33 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:13.954 22:13:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:13.954 22:13:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.954 22:13:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:13.954 22:13:33 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.954 22:13:33 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:13.954 22:13:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:13.954 22:13:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.954 22:13:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:13.954 22:13:33 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.954 22:13:33 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.954 22:13:33 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.954 22:13:33 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:13.954 22:13:33 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.954 22:13:33 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:13.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.954 --rc genhtml_branch_coverage=1 00:06:13.954 --rc genhtml_function_coverage=1 00:06:13.954 --rc genhtml_legend=1 00:06:13.954 --rc geninfo_all_blocks=1 00:06:13.954 --rc geninfo_unexecuted_blocks=1 00:06:13.954 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:13.954 ' 00:06:13.954 22:13:33 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:13.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.954 --rc genhtml_branch_coverage=1 00:06:13.954 --rc genhtml_function_coverage=1 00:06:13.954 --rc genhtml_legend=1 00:06:13.954 --rc geninfo_all_blocks=1 00:06:13.954 --rc geninfo_unexecuted_blocks=1 00:06:13.954 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:13.954 ' 00:06:13.954 22:13:33 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:13.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.954 --rc genhtml_branch_coverage=1 00:06:13.954 --rc genhtml_function_coverage=1 00:06:13.954 --rc genhtml_legend=1 00:06:13.954 --rc geninfo_all_blocks=1 00:06:13.954 --rc geninfo_unexecuted_blocks=1 00:06:13.954 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:13.954 ' 00:06:13.954 22:13:33 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:13.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.954 --rc genhtml_branch_coverage=1 00:06:13.954 --rc genhtml_function_coverage=1 00:06:13.954 --rc genhtml_legend=1 00:06:13.954 --rc geninfo_all_blocks=1 00:06:13.954 --rc geninfo_unexecuted_blocks=1 00:06:13.954 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:13.954 ' 00:06:13.954 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:06:13.954 22:13:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:13.954 22:13:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:06:13.954 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:13.954 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:13.954 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:13.954 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:13.954 22:13:33 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.954 22:13:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.954 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3092329 00:06:13.954 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3092329 00:06:13.954 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:13.954 22:13:33 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 3092329 ']' 00:06:13.954 22:13:33 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.954 22:13:33 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:13.954 22:13:33 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.954 22:13:33 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:13.954 22:13:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.954 [2024-10-29 22:13:33.263379] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:06:13.954 [2024-10-29 22:13:33.263447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092329 ] 00:06:13.954 [2024-10-29 22:13:33.348223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.954 [2024-10-29 22:13:33.393880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.954 [2024-10-29 22:13:33.393893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.214 22:13:33 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.214 22:13:33 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:14.214 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3092333 00:06:14.214 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:14.214 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:14.476 [ 00:06:14.476 "spdk_get_version", 00:06:14.476 "rpc_get_methods", 00:06:14.476 "notify_get_notifications", 00:06:14.476 "notify_get_types", 00:06:14.476 "trace_get_info", 00:06:14.476 "trace_get_tpoint_group_mask", 00:06:14.476 "trace_disable_tpoint_group", 00:06:14.476 "trace_enable_tpoint_group", 00:06:14.476 "trace_clear_tpoint_mask", 00:06:14.476 "trace_set_tpoint_mask", 00:06:14.476 "fsdev_set_opts", 00:06:14.476 "fsdev_get_opts", 00:06:14.476 "framework_get_pci_devices", 00:06:14.476 "framework_get_config", 00:06:14.476 "framework_get_subsystems", 00:06:14.476 "vfu_tgt_set_base_path", 00:06:14.476 
"keyring_get_keys", 00:06:14.476 "iobuf_get_stats", 00:06:14.476 "iobuf_set_options", 00:06:14.476 "sock_get_default_impl", 00:06:14.476 "sock_set_default_impl", 00:06:14.476 "sock_impl_set_options", 00:06:14.476 "sock_impl_get_options", 00:06:14.476 "vmd_rescan", 00:06:14.476 "vmd_remove_device", 00:06:14.476 "vmd_enable", 00:06:14.476 "accel_get_stats", 00:06:14.476 "accel_set_options", 00:06:14.476 "accel_set_driver", 00:06:14.476 "accel_crypto_key_destroy", 00:06:14.476 "accel_crypto_keys_get", 00:06:14.476 "accel_crypto_key_create", 00:06:14.476 "accel_assign_opc", 00:06:14.476 "accel_get_module_info", 00:06:14.476 "accel_get_opc_assignments", 00:06:14.476 "bdev_get_histogram", 00:06:14.476 "bdev_enable_histogram", 00:06:14.476 "bdev_set_qos_limit", 00:06:14.476 "bdev_set_qd_sampling_period", 00:06:14.476 "bdev_get_bdevs", 00:06:14.476 "bdev_reset_iostat", 00:06:14.476 "bdev_get_iostat", 00:06:14.476 "bdev_examine", 00:06:14.476 "bdev_wait_for_examine", 00:06:14.476 "bdev_set_options", 00:06:14.476 "scsi_get_devices", 00:06:14.476 "thread_set_cpumask", 00:06:14.476 "scheduler_set_options", 00:06:14.476 "framework_get_governor", 00:06:14.476 "framework_get_scheduler", 00:06:14.476 "framework_set_scheduler", 00:06:14.476 "framework_get_reactors", 00:06:14.476 "thread_get_io_channels", 00:06:14.476 "thread_get_pollers", 00:06:14.476 "thread_get_stats", 00:06:14.476 "framework_monitor_context_switch", 00:06:14.476 "spdk_kill_instance", 00:06:14.476 "log_enable_timestamps", 00:06:14.476 "log_get_flags", 00:06:14.476 "log_clear_flag", 00:06:14.476 "log_set_flag", 00:06:14.476 "log_get_level", 00:06:14.476 "log_set_level", 00:06:14.476 "log_get_print_level", 00:06:14.476 "log_set_print_level", 00:06:14.476 "framework_enable_cpumask_locks", 00:06:14.476 "framework_disable_cpumask_locks", 00:06:14.476 "framework_wait_init", 00:06:14.476 "framework_start_init", 00:06:14.476 "virtio_blk_create_transport", 00:06:14.476 "virtio_blk_get_transports", 00:06:14.476 "vhost_controller_set_coalescing", 00:06:14.476 "vhost_get_controllers", 00:06:14.476 "vhost_delete_controller", 00:06:14.476 "vhost_create_blk_controller", 00:06:14.476 "vhost_scsi_controller_remove_target", 00:06:14.476 "vhost_scsi_controller_add_target", 00:06:14.476 "vhost_start_scsi_controller", 00:06:14.476 "vhost_create_scsi_controller", 00:06:14.476 "ublk_recover_disk", 00:06:14.476 "ublk_get_disks", 00:06:14.476 "ublk_stop_disk", 00:06:14.476 "ublk_start_disk", 00:06:14.476 "ublk_destroy_target", 00:06:14.476 "ublk_create_target", 00:06:14.476 "nbd_get_disks", 00:06:14.476 "nbd_stop_disk", 00:06:14.476 "nbd_start_disk", 00:06:14.476 "env_dpdk_get_mem_stats", 00:06:14.476 "nvmf_stop_mdns_prr", 00:06:14.476 "nvmf_publish_mdns_prr", 00:06:14.476 "nvmf_subsystem_get_listeners", 00:06:14.476 "nvmf_subsystem_get_qpairs", 00:06:14.476 "nvmf_subsystem_get_controllers", 00:06:14.476 "nvmf_get_stats", 00:06:14.476 "nvmf_get_transports", 00:06:14.476 "nvmf_create_transport", 00:06:14.476 "nvmf_get_targets", 00:06:14.476 "nvmf_delete_target", 00:06:14.476 "nvmf_create_target", 00:06:14.476 "nvmf_subsystem_allow_any_host", 00:06:14.476 "nvmf_subsystem_set_keys", 00:06:14.476 "nvmf_subsystem_remove_host", 00:06:14.476 "nvmf_subsystem_add_host", 00:06:14.476 "nvmf_ns_remove_host", 00:06:14.476 "nvmf_ns_add_host", 00:06:14.476 "nvmf_subsystem_remove_ns", 00:06:14.476 "nvmf_subsystem_set_ns_ana_group", 00:06:14.476 "nvmf_subsystem_add_ns", 00:06:14.476 "nvmf_subsystem_listener_set_ana_state", 00:06:14.476 "nvmf_discovery_get_referrals", 
00:06:14.476 "nvmf_discovery_remove_referral", 00:06:14.476 "nvmf_discovery_add_referral", 00:06:14.476 "nvmf_subsystem_remove_listener", 00:06:14.476 "nvmf_subsystem_add_listener", 00:06:14.476 "nvmf_delete_subsystem", 00:06:14.476 "nvmf_create_subsystem", 00:06:14.476 "nvmf_get_subsystems", 00:06:14.476 "nvmf_set_crdt", 00:06:14.476 "nvmf_set_config", 00:06:14.476 "nvmf_set_max_subsystems", 00:06:14.476 "iscsi_get_histogram", 00:06:14.476 "iscsi_enable_histogram", 00:06:14.476 "iscsi_set_options", 00:06:14.476 "iscsi_get_auth_groups", 00:06:14.476 "iscsi_auth_group_remove_secret", 00:06:14.476 "iscsi_auth_group_add_secret", 00:06:14.476 "iscsi_delete_auth_group", 00:06:14.476 "iscsi_create_auth_group", 00:06:14.476 "iscsi_set_discovery_auth", 00:06:14.476 "iscsi_get_options", 00:06:14.476 "iscsi_target_node_request_logout", 00:06:14.476 "iscsi_target_node_set_redirect", 00:06:14.476 "iscsi_target_node_set_auth", 00:06:14.476 "iscsi_target_node_add_lun", 00:06:14.476 "iscsi_get_stats", 00:06:14.476 "iscsi_get_connections", 00:06:14.476 "iscsi_portal_group_set_auth", 00:06:14.476 "iscsi_start_portal_group", 00:06:14.476 "iscsi_delete_portal_group", 00:06:14.476 "iscsi_create_portal_group", 00:06:14.476 "iscsi_get_portal_groups", 00:06:14.476 "iscsi_delete_target_node", 00:06:14.476 "iscsi_target_node_remove_pg_ig_maps", 00:06:14.476 "iscsi_target_node_add_pg_ig_maps", 00:06:14.476 "iscsi_create_target_node", 00:06:14.476 "iscsi_get_target_nodes", 00:06:14.476 "iscsi_delete_initiator_group", 00:06:14.476 "iscsi_initiator_group_remove_initiators", 00:06:14.476 "iscsi_initiator_group_add_initiators", 00:06:14.476 "iscsi_create_initiator_group", 00:06:14.476 "iscsi_get_initiator_groups", 00:06:14.476 "fsdev_aio_delete", 00:06:14.476 "fsdev_aio_create", 00:06:14.476 "keyring_linux_set_options", 00:06:14.476 "keyring_file_remove_key", 00:06:14.476 "keyring_file_add_key", 00:06:14.476 "vfu_virtio_create_fs_endpoint", 00:06:14.476 "vfu_virtio_create_scsi_endpoint", 00:06:14.476 "vfu_virtio_scsi_remove_target", 00:06:14.476 "vfu_virtio_scsi_add_target", 00:06:14.476 "vfu_virtio_create_blk_endpoint", 00:06:14.476 "vfu_virtio_delete_endpoint", 00:06:14.476 "iaa_scan_accel_module", 00:06:14.476 "dsa_scan_accel_module", 00:06:14.476 "ioat_scan_accel_module", 00:06:14.476 "accel_error_inject_error", 00:06:14.476 "bdev_iscsi_delete", 00:06:14.476 "bdev_iscsi_create", 00:06:14.476 "bdev_iscsi_set_options", 00:06:14.476 "bdev_virtio_attach_controller", 00:06:14.476 "bdev_virtio_scsi_get_devices", 00:06:14.476 "bdev_virtio_detach_controller", 00:06:14.476 "bdev_virtio_blk_set_hotplug", 00:06:14.476 "bdev_ftl_set_property", 00:06:14.476 "bdev_ftl_get_properties", 00:06:14.476 "bdev_ftl_get_stats", 00:06:14.476 "bdev_ftl_unmap", 00:06:14.476 "bdev_ftl_unload", 00:06:14.476 "bdev_ftl_delete", 00:06:14.476 "bdev_ftl_load", 00:06:14.476 "bdev_ftl_create", 00:06:14.476 "bdev_aio_delete", 00:06:14.476 "bdev_aio_rescan", 00:06:14.476 "bdev_aio_create", 00:06:14.476 "blobfs_create", 00:06:14.476 "blobfs_detect", 00:06:14.476 "blobfs_set_cache_size", 00:06:14.476 "bdev_zone_block_delete", 00:06:14.476 "bdev_zone_block_create", 00:06:14.476 "bdev_delay_delete", 00:06:14.476 "bdev_delay_create", 00:06:14.476 "bdev_delay_update_latency", 00:06:14.476 "bdev_split_delete", 00:06:14.476 "bdev_split_create", 00:06:14.476 "bdev_error_inject_error", 00:06:14.476 "bdev_error_delete", 00:06:14.476 "bdev_error_create", 00:06:14.476 "bdev_raid_set_options", 00:06:14.476 "bdev_raid_remove_base_bdev", 00:06:14.477 
"bdev_raid_add_base_bdev", 00:06:14.477 "bdev_raid_delete", 00:06:14.477 "bdev_raid_create", 00:06:14.477 "bdev_raid_get_bdevs", 00:06:14.477 "bdev_lvol_set_parent_bdev", 00:06:14.477 "bdev_lvol_set_parent", 00:06:14.477 "bdev_lvol_check_shallow_copy", 00:06:14.477 "bdev_lvol_start_shallow_copy", 00:06:14.477 "bdev_lvol_grow_lvstore", 00:06:14.477 "bdev_lvol_get_lvols", 00:06:14.477 "bdev_lvol_get_lvstores", 00:06:14.477 "bdev_lvol_delete", 00:06:14.477 "bdev_lvol_set_read_only", 00:06:14.477 "bdev_lvol_resize", 00:06:14.477 "bdev_lvol_decouple_parent", 00:06:14.477 "bdev_lvol_inflate", 00:06:14.477 "bdev_lvol_rename", 00:06:14.477 "bdev_lvol_clone_bdev", 00:06:14.477 "bdev_lvol_clone", 00:06:14.477 "bdev_lvol_snapshot", 00:06:14.477 "bdev_lvol_create", 00:06:14.477 "bdev_lvol_delete_lvstore", 00:06:14.477 "bdev_lvol_rename_lvstore", 00:06:14.477 "bdev_lvol_create_lvstore", 00:06:14.477 "bdev_passthru_delete", 00:06:14.477 "bdev_passthru_create", 00:06:14.477 "bdev_nvme_cuse_unregister", 00:06:14.477 "bdev_nvme_cuse_register", 00:06:14.477 "bdev_opal_new_user", 00:06:14.477 "bdev_opal_set_lock_state", 00:06:14.477 "bdev_opal_delete", 00:06:14.477 "bdev_opal_get_info", 00:06:14.477 "bdev_opal_create", 00:06:14.477 "bdev_nvme_opal_revert", 00:06:14.477 "bdev_nvme_opal_init", 00:06:14.477 "bdev_nvme_send_cmd", 00:06:14.477 "bdev_nvme_set_keys", 00:06:14.477 "bdev_nvme_get_path_iostat", 00:06:14.477 "bdev_nvme_get_mdns_discovery_info", 00:06:14.477 "bdev_nvme_stop_mdns_discovery", 00:06:14.477 "bdev_nvme_start_mdns_discovery", 00:06:14.477 "bdev_nvme_set_multipath_policy", 00:06:14.477 "bdev_nvme_set_preferred_path", 00:06:14.477 "bdev_nvme_get_io_paths", 00:06:14.477 "bdev_nvme_remove_error_injection", 00:06:14.477 "bdev_nvme_add_error_injection", 00:06:14.477 "bdev_nvme_get_discovery_info", 00:06:14.477 "bdev_nvme_stop_discovery", 00:06:14.477 "bdev_nvme_start_discovery", 00:06:14.477 "bdev_nvme_get_controller_health_info", 00:06:14.477 "bdev_nvme_disable_controller", 00:06:14.477 "bdev_nvme_enable_controller", 00:06:14.477 "bdev_nvme_reset_controller", 00:06:14.477 "bdev_nvme_get_transport_statistics", 00:06:14.477 "bdev_nvme_apply_firmware", 00:06:14.477 "bdev_nvme_detach_controller", 00:06:14.477 "bdev_nvme_get_controllers", 00:06:14.477 "bdev_nvme_attach_controller", 00:06:14.477 "bdev_nvme_set_hotplug", 00:06:14.477 "bdev_nvme_set_options", 00:06:14.477 "bdev_null_resize", 00:06:14.477 "bdev_null_delete", 00:06:14.477 "bdev_null_create", 00:06:14.477 "bdev_malloc_delete", 00:06:14.477 "bdev_malloc_create" 00:06:14.477 ] 00:06:14.477 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:14.477 22:13:33 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:14.477 22:13:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.477 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:14.477 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3092329 00:06:14.477 22:13:33 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 3092329 ']' 00:06:14.477 22:13:33 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 3092329 00:06:14.477 22:13:33 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:14.477 22:13:33 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:14.477 22:13:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3092329 00:06:14.477 22:13:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:14.477 
22:13:33 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:14.477 22:13:33 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3092329' 00:06:14.477 killing process with pid 3092329 00:06:14.477 22:13:33 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 3092329 00:06:14.477 22:13:33 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 3092329 00:06:14.736 00:06:14.736 real 0m1.185s 00:06:14.736 user 0m1.956s 00:06:14.736 sys 0m0.506s 00:06:14.736 22:13:34 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:14.736 22:13:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.736 ************************************ 00:06:14.736 END TEST spdkcli_tcp 00:06:14.736 ************************************ 00:06:14.737 22:13:34 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:14.997 22:13:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:14.997 22:13:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:14.997 22:13:34 -- common/autotest_common.sh@10 -- # set +x 00:06:14.997 ************************************ 00:06:14.997 START TEST dpdk_mem_utility 00:06:14.997 ************************************ 00:06:14.997 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:14.997 * Looking for test storage... 00:06:14.997 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:14.997 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:14.997 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:14.997 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:14.997 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.997 22:13:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:14.997 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.997 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:14.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.997 --rc genhtml_branch_coverage=1 00:06:14.997 --rc genhtml_function_coverage=1 00:06:14.997 --rc genhtml_legend=1 00:06:14.997 --rc geninfo_all_blocks=1 00:06:14.997 --rc geninfo_unexecuted_blocks=1 00:06:14.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:14.997 ' 00:06:14.997 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:14.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.997 --rc genhtml_branch_coverage=1 00:06:14.997 --rc genhtml_function_coverage=1 00:06:14.997 --rc genhtml_legend=1 00:06:14.997 --rc geninfo_all_blocks=1 00:06:14.997 --rc geninfo_unexecuted_blocks=1 00:06:14.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:14.997 ' 00:06:14.997 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:14.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.997 --rc genhtml_branch_coverage=1 00:06:14.997 --rc genhtml_function_coverage=1 00:06:14.997 --rc genhtml_legend=1 00:06:14.997 --rc geninfo_all_blocks=1 00:06:14.997 --rc geninfo_unexecuted_blocks=1 00:06:14.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:14.997 ' 00:06:14.997 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:14.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.997 --rc genhtml_branch_coverage=1 00:06:14.997 --rc genhtml_function_coverage=1 00:06:14.997 --rc genhtml_legend=1 00:06:14.997 --rc geninfo_all_blocks=1 00:06:14.997 --rc geninfo_unexecuted_blocks=1 00:06:14.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:14.997 ' 00:06:14.997 22:13:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:14.997 22:13:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3092579 00:06:14.997 22:13:34 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.997 22:13:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3092579 00:06:14.998 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 3092579 ']' 00:06:14.998 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.998 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:14.998 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.998 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:14.998 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:15.257 [2024-10-29 22:13:34.521049] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:06:15.257 [2024-10-29 22:13:34.521114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092579 ] 00:06:15.257 [2024-10-29 22:13:34.604843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.257 [2024-10-29 22:13:34.651614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.518 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:15.518 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:15.518 22:13:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:15.518 22:13:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:15.518 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.518 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:15.518 { 00:06:15.518 "filename": "/tmp/spdk_mem_dump.txt" 00:06:15.518 } 00:06:15.518 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.518 22:13:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:15.518 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:15.518 1 heaps totaling size 810.000000 MiB 00:06:15.518 size: 810.000000 MiB heap id: 0 00:06:15.518 end heaps---------- 00:06:15.518 9 mempools totaling size 595.772034 MiB 00:06:15.518 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:15.518 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:15.518 size: 92.545471 MiB name: bdev_io_3092579 00:06:15.518 size: 50.003479 MiB name: msgpool_3092579 00:06:15.518 size: 36.509338 MiB name: fsdev_io_3092579 00:06:15.518 size: 21.763794 MiB name: PDU_Pool 00:06:15.518 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:15.518 size: 4.133484 MiB name: evtpool_3092579 00:06:15.518 size: 0.026123 MiB name: Session_Pool 00:06:15.518 end mempools------- 00:06:15.518 6 memzones totaling size 4.142822 MiB 00:06:15.518 size: 1.000366 MiB name: RG_ring_0_3092579 00:06:15.518 size: 1.000366 MiB name: RG_ring_1_3092579 00:06:15.518 size: 1.000366 MiB name: RG_ring_4_3092579 
00:06:15.518 size: 1.000366 MiB name: RG_ring_5_3092579 00:06:15.518 size: 0.125366 MiB name: RG_ring_2_3092579 00:06:15.518 size: 0.015991 MiB name: RG_ring_3_3092579 00:06:15.518 end memzones------- 00:06:15.518 22:13:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:15.518 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:15.518 list of free elements. size: 10.862488 MiB 00:06:15.518 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:15.518 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:15.518 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:15.518 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:15.518 element at address: 0x200008000000 with size: 0.959839 MiB 00:06:15.518 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:15.518 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:15.518 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:15.518 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:15.518 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:15.518 element at address: 0x200003e00000 with size: 0.490723 MiB 00:06:15.518 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:15.518 element at address: 0x200010600000 with size: 0.481934 MiB 00:06:15.518 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:15.518 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:15.518 list of standard malloc elements. size: 199.218628 MiB 00:06:15.518 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:06:15.518 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:06:15.518 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:15.518 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:15.518 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:15.518 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:15.518 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:15.518 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:15.518 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:15.518 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:15.518 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:15.518 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:15.518 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:15.518 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:15.518 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:15.518 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:15.518 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:15.518 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:15.518 element at address: 0x20000085b100 with size: 0.000183 MiB 00:06:15.518 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:06:15.518 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:06:15.518 element at address: 0x2000008df880 with size: 0.000183 MiB 00:06:15.518 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:15.518 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:15.518 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:15.518 element at address: 0x200000cff0c0 with size: 
0.000183 MiB 00:06:15.518 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:06:15.518 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:06:15.518 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:06:15.518 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:06:15.518 element at address: 0x20001067b600 with size: 0.000183 MiB 00:06:15.518 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:06:15.518 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:06:15.518 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:15.518 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:15.518 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:15.518 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:15.518 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:15.518 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:15.518 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:15.518 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:15.518 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:15.518 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:15.518 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:15.518 list of memzone associated elements. size: 599.918884 MiB 00:06:15.518 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:15.518 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:15.518 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:15.518 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:15.518 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:15.518 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3092579_0 00:06:15.518 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:15.518 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3092579_0 00:06:15.518 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:06:15.518 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3092579_0 00:06:15.518 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:15.518 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:15.518 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:15.518 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:15.518 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:15.519 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3092579_0 00:06:15.519 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:15.519 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3092579 00:06:15.519 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:15.519 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3092579 00:06:15.519 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:06:15.519 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:15.519 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:15.519 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:15.519 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:06:15.519 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:15.519 element at address: 0x200003efde40 with size: 1.008118 MiB 00:06:15.519 associated memzone info: size: 1.007996 MiB 
name: MP_SCSI_TASK_Pool 00:06:15.519 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:15.519 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3092579 00:06:15.519 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:15.519 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3092579 00:06:15.519 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:15.519 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3092579 00:06:15.519 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:15.519 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3092579 00:06:15.519 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:06:15.519 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3092579 00:06:15.519 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:15.519 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3092579 00:06:15.519 element at address: 0x20001067b780 with size: 0.500488 MiB 00:06:15.519 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:15.519 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:06:15.519 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:15.519 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:15.519 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:15.519 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:15.519 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3092579 00:06:15.519 element at address: 0x2000008df940 with size: 0.125488 MiB 00:06:15.519 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3092579 00:06:15.519 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:06:15.519 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:15.519 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:15.519 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:15.519 element at address: 0x2000008db680 with size: 0.016113 MiB 00:06:15.519 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3092579 00:06:15.519 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:15.519 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:15.519 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:15.519 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3092579 00:06:15.519 element at address: 0x2000008db480 with size: 0.000305 MiB 00:06:15.519 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3092579 00:06:15.519 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:15.519 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3092579 00:06:15.519 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:15.519 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:15.519 22:13:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:15.519 22:13:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3092579 00:06:15.519 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 3092579 ']' 00:06:15.519 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 3092579 00:06:15.519 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:06:15.519 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux 
= Linux ']' 00:06:15.519 22:13:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3092579 00:06:15.519 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:15.519 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:15.519 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3092579' 00:06:15.519 killing process with pid 3092579 00:06:15.519 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 3092579 00:06:15.519 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 3092579 00:06:16.089 00:06:16.089 real 0m1.018s 00:06:16.089 user 0m0.922s 00:06:16.089 sys 0m0.439s 00:06:16.089 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:16.089 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.089 ************************************ 00:06:16.089 END TEST dpdk_mem_utility 00:06:16.089 ************************************ 00:06:16.089 22:13:35 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:16.089 22:13:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:16.089 22:13:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:16.089 22:13:35 -- common/autotest_common.sh@10 -- # set +x 00:06:16.089 ************************************ 00:06:16.089 START TEST event 00:06:16.089 ************************************ 00:06:16.089 22:13:35 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:16.089 * Looking for test storage... 00:06:16.089 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:16.089 22:13:35 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:16.089 22:13:35 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:16.089 22:13:35 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:16.089 22:13:35 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:16.089 22:13:35 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.089 22:13:35 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.089 22:13:35 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.089 22:13:35 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.089 22:13:35 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.089 22:13:35 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.089 22:13:35 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.089 22:13:35 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.089 22:13:35 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.089 22:13:35 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.089 22:13:35 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.089 22:13:35 event -- scripts/common.sh@344 -- # case "$op" in 00:06:16.089 22:13:35 event -- scripts/common.sh@345 -- # : 1 00:06:16.089 22:13:35 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.089 22:13:35 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.089 22:13:35 event -- scripts/common.sh@365 -- # decimal 1 00:06:16.089 22:13:35 event -- scripts/common.sh@353 -- # local d=1 00:06:16.089 22:13:35 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.089 22:13:35 event -- scripts/common.sh@355 -- # echo 1 00:06:16.089 22:13:35 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.089 22:13:35 event -- scripts/common.sh@366 -- # decimal 2 00:06:16.089 22:13:35 event -- scripts/common.sh@353 -- # local d=2 00:06:16.089 22:13:35 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.089 22:13:35 event -- scripts/common.sh@355 -- # echo 2 00:06:16.089 22:13:35 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.089 22:13:35 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.089 22:13:35 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.089 22:13:35 event -- scripts/common.sh@368 -- # return 0 00:06:16.089 22:13:35 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.089 22:13:35 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:16.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.089 --rc genhtml_branch_coverage=1 00:06:16.089 --rc genhtml_function_coverage=1 00:06:16.089 --rc genhtml_legend=1 00:06:16.089 --rc geninfo_all_blocks=1 00:06:16.089 --rc geninfo_unexecuted_blocks=1 00:06:16.089 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:16.089 ' 00:06:16.089 22:13:35 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:16.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.089 --rc genhtml_branch_coverage=1 00:06:16.089 --rc genhtml_function_coverage=1 00:06:16.089 --rc genhtml_legend=1 00:06:16.089 --rc geninfo_all_blocks=1 00:06:16.089 --rc geninfo_unexecuted_blocks=1 00:06:16.089 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:16.089 ' 00:06:16.089 22:13:35 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:16.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.089 --rc genhtml_branch_coverage=1 00:06:16.089 --rc genhtml_function_coverage=1 00:06:16.089 --rc genhtml_legend=1 00:06:16.089 --rc geninfo_all_blocks=1 00:06:16.089 --rc geninfo_unexecuted_blocks=1 00:06:16.089 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:16.089 ' 00:06:16.089 22:13:35 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:16.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.089 --rc genhtml_branch_coverage=1 00:06:16.089 --rc genhtml_function_coverage=1 00:06:16.089 --rc genhtml_legend=1 00:06:16.089 --rc geninfo_all_blocks=1 00:06:16.089 --rc geninfo_unexecuted_blocks=1 00:06:16.089 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:16.089 ' 00:06:16.089 22:13:35 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:16.089 22:13:35 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:16.089 22:13:35 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:16.089 22:13:35 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:16.089 22:13:35 event -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:06:16.089 22:13:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.089 ************************************ 00:06:16.089 START TEST event_perf 00:06:16.089 ************************************ 00:06:16.089 22:13:35 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:16.349 Running I/O for 1 seconds...[2024-10-29 22:13:35.630514] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:06:16.349 [2024-10-29 22:13:35.630596] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092815 ] 00:06:16.349 [2024-10-29 22:13:35.719907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.349 [2024-10-29 22:13:35.767861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.349 [2024-10-29 22:13:35.767965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.349 [2024-10-29 22:13:35.768064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.349 [2024-10-29 22:13:35.768066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.286 Running I/O for 1 seconds... 00:06:17.286 lcore 0: 192798 00:06:17.286 lcore 1: 192796 00:06:17.286 lcore 2: 192798 00:06:17.286 lcore 3: 192797 00:06:17.286 done. 00:06:17.546 00:06:17.546 real 0m1.200s 00:06:17.546 user 0m4.093s 00:06:17.546 sys 0m0.103s 00:06:17.546 22:13:36 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.546 22:13:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.546 ************************************ 00:06:17.546 END TEST event_perf 00:06:17.546 ************************************ 00:06:17.546 22:13:36 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:17.546 22:13:36 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:17.546 22:13:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:17.546 22:13:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.546 ************************************ 00:06:17.546 START TEST event_reactor 00:06:17.546 ************************************ 00:06:17.546 22:13:36 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:17.546 [2024-10-29 22:13:36.913203] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:06:17.546 [2024-10-29 22:13:36.913284] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093015 ] 00:06:17.546 [2024-10-29 22:13:37.003682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.546 [2024-10-29 22:13:37.048595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.923 test_start 00:06:18.923 oneshot 00:06:18.923 tick 100 00:06:18.923 tick 100 00:06:18.923 tick 250 00:06:18.923 tick 100 00:06:18.923 tick 100 00:06:18.923 tick 100 00:06:18.923 tick 250 00:06:18.923 tick 500 00:06:18.923 tick 100 00:06:18.923 tick 100 00:06:18.923 tick 250 00:06:18.923 tick 100 00:06:18.923 tick 100 00:06:18.923 test_end 00:06:18.923 00:06:18.923 real 0m1.193s 00:06:18.923 user 0m1.096s 00:06:18.923 sys 0m0.093s 00:06:18.923 22:13:38 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:18.923 22:13:38 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:18.923 ************************************ 00:06:18.923 END TEST event_reactor 00:06:18.923 ************************************ 00:06:18.924 22:13:38 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:18.924 22:13:38 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:18.924 22:13:38 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.924 22:13:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.924 ************************************ 00:06:18.924 START TEST event_reactor_perf 00:06:18.924 ************************************ 00:06:18.924 22:13:38 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:18.924 [2024-10-29 22:13:38.181672] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:06:18.924 [2024-10-29 22:13:38.181768] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093210 ] 00:06:18.924 [2024-10-29 22:13:38.269429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.924 [2024-10-29 22:13:38.313735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.862 test_start 00:06:19.862 test_end 00:06:19.862 Performance: 967916 events per second 00:06:19.862 00:06:19.862 real 0m1.189s 00:06:19.862 user 0m1.091s 00:06:19.862 sys 0m0.094s 00:06:19.862 22:13:39 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:19.862 22:13:39 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.862 ************************************ 00:06:19.862 END TEST event_reactor_perf 00:06:19.862 ************************************ 00:06:20.121 22:13:39 event -- event/event.sh@49 -- # uname -s 00:06:20.121 22:13:39 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:20.121 22:13:39 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:20.121 22:13:39 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:20.121 22:13:39 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.121 22:13:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.121 ************************************ 00:06:20.121 START TEST event_scheduler 00:06:20.121 ************************************ 00:06:20.121 22:13:39 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:20.121 * Looking for test storage... 
00:06:20.121 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:20.121 22:13:39 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:20.121 22:13:39 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:20.121 22:13:39 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:20.121 22:13:39 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:20.121 22:13:39 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.121 22:13:39 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.121 22:13:39 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.122 22:13:39 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:20.122 22:13:39 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.122 22:13:39 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:20.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.122 --rc genhtml_branch_coverage=1 00:06:20.122 --rc genhtml_function_coverage=1 00:06:20.122 --rc genhtml_legend=1 00:06:20.122 --rc geninfo_all_blocks=1 00:06:20.122 --rc geninfo_unexecuted_blocks=1 00:06:20.122 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.122 ' 00:06:20.122 22:13:39 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:20.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.122 --rc genhtml_branch_coverage=1 00:06:20.122 --rc genhtml_function_coverage=1 00:06:20.122 --rc genhtml_legend=1 00:06:20.122 --rc geninfo_all_blocks=1 00:06:20.122 --rc geninfo_unexecuted_blocks=1 00:06:20.122 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.122 ' 00:06:20.122 22:13:39 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:20.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.122 --rc genhtml_branch_coverage=1 00:06:20.122 --rc genhtml_function_coverage=1 00:06:20.122 --rc genhtml_legend=1 00:06:20.122 --rc geninfo_all_blocks=1 00:06:20.122 --rc geninfo_unexecuted_blocks=1 00:06:20.122 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.122 ' 00:06:20.122 22:13:39 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:20.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.122 --rc genhtml_branch_coverage=1 00:06:20.122 --rc genhtml_function_coverage=1 00:06:20.122 --rc genhtml_legend=1 00:06:20.122 --rc geninfo_all_blocks=1 00:06:20.122 --rc geninfo_unexecuted_blocks=1 00:06:20.122 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.122 ' 00:06:20.122 22:13:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:20.122 22:13:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3093443 00:06:20.122 22:13:39 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.122 22:13:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:20.122 22:13:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3093443 00:06:20.122 22:13:39 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 3093443 ']' 00:06:20.122 22:13:39 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.122 22:13:39 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:20.122 22:13:39 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.122 22:13:39 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:20.122 22:13:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.381 [2024-10-29 22:13:39.663852] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:06:20.381 [2024-10-29 22:13:39.663941] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093443 ] 00:06:20.381 [2024-10-29 22:13:39.749200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.381 [2024-10-29 22:13:39.799242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.381 [2024-10-29 22:13:39.799353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.381 [2024-10-29 22:13:39.799396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.381 [2024-10-29 22:13:39.799398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.381 22:13:39 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:20.381 22:13:39 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:06:20.381 22:13:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:20.381 22:13:39 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.381 22:13:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.381 [2024-10-29 22:13:39.848206] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:20.381 [2024-10-29 22:13:39.848227] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:20.381 [2024-10-29 22:13:39.848239] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:20.381 [2024-10-29 22:13:39.848247] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:20.381 [2024-10-29 22:13:39.848254] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:20.381 22:13:39 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.381 22:13:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:20.381 22:13:39 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 
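Annotation: the long scripts/common.sh trace above is just a field-wise version comparison used to decide whether the old-lcov LCOV_OPTS/--gcov-tool flags get exported. A minimal re-implementation of the same idea is sketched below; it assumes purely numeric version fields (the real helper also normalizes each field through its decimal() function), and version_lt is a hypothetical name, not the upstream function.

    # split both versions on '.', '-' or ':' and compare field by field,
    # the way scripts/common.sh cmp_versions does in the trace above
    version_lt() {
      local -a a b
      local i n x y
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}
        (( x < y )) && return 0   # first differing field decides
        (( x > y )) && return 1
      done
      return 1                    # equal versions: "less than" is false
    }

    # the check driving the LCOV_OPTS export seen above: lcov older than 2.x
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi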
00:06:20.381 22:13:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.640 [2024-10-29 22:13:39.922172] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:20.640 22:13:39 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.640 22:13:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:20.640 22:13:39 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:20.640 22:13:39 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.640 22:13:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.640 ************************************ 00:06:20.640 START TEST scheduler_create_thread 00:06:20.640 ************************************ 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.640 2 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.640 3 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.640 4 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.640 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.640 5 00:06:20.640 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.640 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:20.640 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.640 
22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.640 6 00:06:20.640 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.640 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:20.640 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.640 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.640 7 00:06:20.640 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.640 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:20.640 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.640 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.640 8 00:06:20.640 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.640 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.641 9 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.641 10 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.641 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.577 22:13:40 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.577 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:21.577 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.577 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.956 22:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.957 22:13:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:22.957 22:13:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:22.957 22:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.957 22:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.895 22:13:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.895 00:06:23.895 real 0m3.383s 00:06:23.895 user 0m0.025s 00:06:23.895 sys 0m0.006s 00:06:23.895 22:13:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:23.895 22:13:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.895 ************************************ 00:06:23.895 END TEST scheduler_create_thread 00:06:23.895 ************************************ 00:06:23.895 22:13:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:23.895 22:13:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3093443 00:06:23.895 22:13:43 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 3093443 ']' 00:06:23.895 22:13:43 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 3093443 00:06:23.895 22:13:43 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:06:23.895 22:13:43 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:23.895 22:13:43 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3093443 00:06:24.154 22:13:43 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:24.154 22:13:43 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:24.154 22:13:43 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3093443' 00:06:24.154 killing process with pid 3093443 00:06:24.154 22:13:43 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 3093443 00:06:24.154 22:13:43 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 3093443 00:06:24.414 [2024-10-29 22:13:43.722424] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
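Annotation: stripped of the xtrace noise, the scheduler_create_thread test above is a short sequence of plugin RPCs against the scheduler test app. The condensed sketch below paraphrases that sequence; the scheduler_plugin module lives with the test under test/event/scheduler and has to be importable by rpc.py, and the create call prints the new thread id, which the script captures.

    rpc="scripts/rpc.py"                     # run from the spdk repo root
    $rpc framework_set_scheduler dynamic     # before framework_start_init, since the app uses --wait-for-rpc
    $rpc framework_start_init

    # one busy and one idle thread pinned to each core of the 0xF mask (0x1..0x8)
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x1 -a 0

    # unpinned threads: one at 30% activity, one bumped from 0% to 50% after creation
    $rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50

    # and one thread that exists only to be deleted again
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"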
00:06:24.414 00:06:24.414 real 0m4.483s 00:06:24.414 user 0m7.781s 00:06:24.414 sys 0m0.458s 00:06:24.414 22:13:43 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:24.414 22:13:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.414 ************************************ 00:06:24.414 END TEST event_scheduler 00:06:24.414 ************************************ 00:06:24.674 22:13:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:24.674 22:13:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:24.674 22:13:43 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:24.674 22:13:43 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:24.674 22:13:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.674 ************************************ 00:06:24.674 START TEST app_repeat 00:06:24.674 ************************************ 00:06:24.674 22:13:44 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:24.674 22:13:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.674 22:13:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.674 22:13:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:24.674 22:13:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.674 22:13:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:24.674 22:13:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:24.674 22:13:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:24.674 22:13:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3094022 00:06:24.674 22:13:44 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.674 22:13:44 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:24.674 22:13:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3094022' 00:06:24.674 Process app_repeat pid: 3094022 00:06:24.674 22:13:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.674 22:13:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:24.674 spdk_app_start Round 0 00:06:24.674 22:13:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3094022 /var/tmp/spdk-nbd.sock 00:06:24.674 22:13:44 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3094022 ']' 00:06:24.674 22:13:44 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.674 22:13:44 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:24.674 22:13:44 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.674 22:13:44 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:24.674 22:13:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.674 [2024-10-29 22:13:44.032425] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:06:24.674 [2024-10-29 22:13:44.032509] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094022 ] 00:06:24.674 [2024-10-29 22:13:44.118332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.674 [2024-10-29 22:13:44.163521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.674 [2024-10-29 22:13:44.163521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.934 22:13:44 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:24.934 22:13:44 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:24.934 22:13:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.934 Malloc0 00:06:25.193 22:13:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.193 Malloc1 00:06:25.193 22:13:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.193 22:13:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:25.453 /dev/nbd0 00:06:25.453 22:13:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.453 22:13:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.453 22:13:44 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:25.453 22:13:44 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:25.453 22:13:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:25.453 22:13:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:25.453 22:13:44 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:06:25.453 22:13:44 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:25.453 22:13:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:25.453 22:13:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:25.453 22:13:44 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.453 1+0 records in 00:06:25.453 1+0 records out 00:06:25.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025917 s, 15.8 MB/s 00:06:25.453 22:13:44 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:25.453 22:13:44 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:25.453 22:13:44 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:25.453 22:13:44 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:25.453 22:13:44 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:25.453 22:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.453 22:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.453 22:13:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.712 /dev/nbd1 00:06:25.712 22:13:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.712 22:13:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.712 22:13:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:25.712 22:13:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:25.712 22:13:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:25.712 22:13:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:25.712 22:13:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:25.712 22:13:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:25.712 22:13:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:25.712 22:13:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:25.712 22:13:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.712 1+0 records in 00:06:25.712 1+0 records out 00:06:25.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025845 s, 15.8 MB/s 00:06:25.712 22:13:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:25.712 22:13:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:25.712 22:13:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:25.712 22:13:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:25.712 22:13:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:25.712 22:13:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.712 22:13:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
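Annotation: the block above sets up the two test devices. In plain shell terms, with /tmp/nbdtest standing in for the test's scratch file:

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096        # -> Malloc0: 64 MB bdev, 4 KiB blocks
    $rpc bdev_malloc_create 64 4096        # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0  # export each bdev as a kernel nbd device
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # waitfornbd: the device counts as ready once it shows up in /proc/partitions
    # and a single direct 4 KiB read from it succeeds
    grep -q -w nbd0 /proc/partitions
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct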
00:06:25.712 22:13:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.712 22:13:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.712 22:13:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.971 22:13:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.971 { 00:06:25.971 "nbd_device": "/dev/nbd0", 00:06:25.971 "bdev_name": "Malloc0" 00:06:25.971 }, 00:06:25.971 { 00:06:25.971 "nbd_device": "/dev/nbd1", 00:06:25.971 "bdev_name": "Malloc1" 00:06:25.971 } 00:06:25.971 ]' 00:06:25.971 22:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.971 { 00:06:25.971 "nbd_device": "/dev/nbd0", 00:06:25.971 "bdev_name": "Malloc0" 00:06:25.971 }, 00:06:25.971 { 00:06:25.971 "nbd_device": "/dev/nbd1", 00:06:25.971 "bdev_name": "Malloc1" 00:06:25.972 } 00:06:25.972 ]' 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.972 /dev/nbd1' 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.972 /dev/nbd1' 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.972 256+0 records in 00:06:25.972 256+0 records out 00:06:25.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107205 s, 97.8 MB/s 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.972 22:13:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.232 256+0 records in 00:06:26.232 256+0 records out 00:06:26.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202907 s, 51.7 MB/s 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.232 256+0 records in 00:06:26.232 256+0 records out 00:06:26.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220662 s, 47.5 
MB/s 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.232 22:13:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:26.492 22:13:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:26.492 22:13:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:26.492 22:13:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:26.492 22:13:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.492 22:13:45 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.492 22:13:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.492 22:13:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.492 22:13:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.492 22:13:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.492 22:13:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.492 22:13:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.751 22:13:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.751 22:13:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.751 22:13:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.751 22:13:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.751 22:13:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.751 22:13:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.751 22:13:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.751 22:13:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.751 22:13:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.751 22:13:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.751 22:13:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.751 22:13:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.751 22:13:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.011 22:13:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.270 [2024-10-29 22:13:46.599214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.270 [2024-10-29 22:13:46.642093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.270 [2024-10-29 22:13:46.642093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.270 [2024-10-29 22:13:46.682735] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.270 [2024-10-29 22:13:46.682780] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:30.564 22:13:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:30.564 22:13:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:30.564 spdk_app_start Round 1 00:06:30.564 22:13:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3094022 /var/tmp/spdk-nbd.sock 00:06:30.564 22:13:49 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3094022 ']' 00:06:30.564 22:13:49 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.564 22:13:49 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:30.564 22:13:49 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
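Annotation: the nbd_dd_data_verify/nbd_stop_disks passes traced above amount to the following write-then-compare loop, with /tmp/nbdrandtest standing in for the test's scratch file:

    sock=/var/tmp/spdk-nbd.sock
    nbd_list=$(scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')

    # write 1 MiB of random data to every exported device, then read it back and compare
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for nbd in $nbd_list; do
      dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest "$nbd"
    done
    rm /tmp/nbdrandtest

    # tear down: detach the devices, then stop this instance of the app
    for nbd in $nbd_list; do
      scripts/rpc.py -s "$sock" nbd_stop_disk "$nbd"
    done
    scripts/rpc.py -s "$sock" spdk_kill_instance SIGTERM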
00:06:30.564 22:13:49 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:30.564 22:13:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.564 22:13:49 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:30.564 22:13:49 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:30.564 22:13:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.564 Malloc0 00:06:30.564 22:13:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.564 Malloc1 00:06:30.823 22:13:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.823 /dev/nbd0 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.823 22:13:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.823 22:13:50 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:30.823 22:13:50 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:30.823 22:13:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:30.823 22:13:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:30.823 22:13:50 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:30.823 22:13:50 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:30.823 22:13:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:30.823 22:13:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:30.823 22:13:50 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.823 1+0 records in 00:06:30.823 1+0 records out 00:06:30.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024021 s, 17.1 MB/s 00:06:30.823 22:13:50 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:30.823 22:13:50 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:30.823 22:13:50 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:31.082 22:13:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.082 22:13:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.082 22:13:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.082 /dev/nbd1 00:06:31.082 22:13:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.082 22:13:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.082 1+0 records in 00:06:31.082 1+0 records out 00:06:31.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266716 s, 15.4 MB/s 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:31.082 22:13:50 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:31.082 22:13:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.082 22:13:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.082 22:13:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.082 22:13:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.082 22:13:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.341 { 00:06:31.341 "nbd_device": "/dev/nbd0", 00:06:31.341 "bdev_name": "Malloc0" 00:06:31.341 }, 00:06:31.341 { 00:06:31.341 "nbd_device": "/dev/nbd1", 00:06:31.341 "bdev_name": "Malloc1" 00:06:31.341 } 00:06:31.341 ]' 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.341 { 00:06:31.341 "nbd_device": "/dev/nbd0", 00:06:31.341 "bdev_name": "Malloc0" 00:06:31.341 }, 00:06:31.341 { 00:06:31.341 "nbd_device": "/dev/nbd1", 00:06:31.341 "bdev_name": "Malloc1" 00:06:31.341 } 00:06:31.341 ]' 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.341 /dev/nbd1' 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.341 /dev/nbd1' 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.341 22:13:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:31.341 256+0 records in 00:06:31.341 256+0 records out 00:06:31.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115651 s, 90.7 MB/s 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.601 256+0 records in 00:06:31.601 256+0 records out 00:06:31.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204186 s, 51.4 MB/s 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.601 256+0 records in 00:06:31.601 256+0 records out 00:06:31.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219346 s, 47.8 MB/s 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.601 22:13:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.860 22:13:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.860 22:13:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.860 22:13:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.860 22:13:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.860 22:13:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.860 22:13:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.860 22:13:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.860 22:13:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.860 22:13:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.860 22:13:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.860 22:13:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.860 22:13:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.860 22:13:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.860 22:13:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.861 22:13:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.861 22:13:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.861 22:13:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.861 22:13:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.861 22:13:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.861 22:13:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.861 22:13:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.119 22:13:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.119 22:13:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.119 22:13:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.119 22:13:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.119 22:13:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.119 22:13:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.119 22:13:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:32.119 22:13:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.119 22:13:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.119 22:13:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.119 22:13:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.119 22:13:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.119 22:13:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.378 22:13:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.637 [2024-10-29 22:13:51.973667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.637 [2024-10-29 22:13:52.016925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.637 [2024-10-29 22:13:52.016938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.637 [2024-10-29 22:13:52.058971] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.637 [2024-10-29 22:13:52.059015] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:35.926 22:13:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:35.926 22:13:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:35.926 spdk_app_start Round 2 00:06:35.926 22:13:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3094022 /var/tmp/spdk-nbd.sock 00:06:35.926 22:13:54 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3094022 ']' 00:06:35.926 22:13:54 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.926 22:13:54 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:35.926 22:13:54 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:35.926 22:13:54 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:35.926 22:13:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.926 22:13:55 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:35.926 22:13:55 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:35.926 22:13:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.926 Malloc0 00:06:35.926 22:13:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.186 Malloc1 00:06:36.186 22:13:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.186 /dev/nbd0 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.186 22:13:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.186 22:13:55 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:36.186 22:13:55 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:36.186 22:13:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:36.186 22:13:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:36.186 22:13:55 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.446 1+0 records in 00:06:36.446 1+0 records out 00:06:36.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222948 s, 18.4 MB/s 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:36.446 22:13:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.446 22:13:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.446 22:13:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.446 /dev/nbd1 00:06:36.446 22:13:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.446 22:13:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.446 1+0 records in 00:06:36.446 1+0 records out 00:06:36.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279978 s, 14.6 MB/s 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:36.446 22:13:55 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:36.446 22:13:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.446 22:13:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.705 22:13:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.705 22:13:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.705 22:13:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.705 { 00:06:36.705 "nbd_device": "/dev/nbd0", 00:06:36.705 "bdev_name": "Malloc0" 00:06:36.705 }, 00:06:36.705 { 00:06:36.705 "nbd_device": "/dev/nbd1", 00:06:36.705 "bdev_name": "Malloc1" 00:06:36.705 } 00:06:36.705 ]' 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.705 { 00:06:36.705 "nbd_device": "/dev/nbd0", 00:06:36.705 "bdev_name": "Malloc0" 00:06:36.705 }, 00:06:36.705 { 00:06:36.705 "nbd_device": "/dev/nbd1", 00:06:36.705 "bdev_name": "Malloc1" 00:06:36.705 } 00:06:36.705 ]' 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.705 /dev/nbd1' 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.705 /dev/nbd1' 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.705 22:13:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.964 256+0 records in 00:06:36.964 256+0 records out 00:06:36.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115512 s, 90.8 MB/s 00:06:36.964 22:13:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.964 22:13:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.964 256+0 records in 00:06:36.964 256+0 records out 00:06:36.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203687 s, 51.5 MB/s 00:06:36.964 22:13:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.964 22:13:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.964 256+0 records in 00:06:36.964 256+0 records out 00:06:36.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221291 s, 47.4 MB/s 00:06:36.964 22:13:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.964 22:13:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.964 22:13:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.964 22:13:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.965 22:13:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.224 22:13:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.224 22:13:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.224 22:13:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.224 22:13:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.224 22:13:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.224 22:13:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.224 22:13:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.224 22:13:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.224 22:13:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.224 22:13:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.483 22:13:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.483 22:13:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.483 22:13:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.483 22:13:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.483 22:13:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.483 22:13:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.483 22:13:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.484 22:13:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.484 22:13:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.484 22:13:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.484 22:13:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.484 22:13:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.484 22:13:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.484 22:13:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.484 22:13:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.744 22:13:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.744 22:13:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.744 22:13:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.744 22:13:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.744 22:13:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.744 22:13:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.744 22:13:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.744 22:13:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.744 22:13:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.744 22:13:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:38.004 [2024-10-29 22:13:57.374224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.004 [2024-10-29 22:13:57.418359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.004 [2024-10-29 22:13:57.418360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.004 [2024-10-29 22:13:57.458308] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:38.004 [2024-10-29 22:13:57.458350] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:41.298 22:14:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3094022 /var/tmp/spdk-nbd.sock 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3094022 ']' 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
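The app_repeat trace above amounts to a simple NBD round-trip driven over the /var/tmp/spdk-nbd.sock RPC socket: create a malloc bdev, export it as an NBD device, push a random pattern through it, compare, and tear it down. A minimal standalone sketch of that flow, assuming an SPDK app is already listening on that socket, /dev/nbd0 is unused, and the scratch-file path is a placeholder:

  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  # 64 MiB malloc bdev with 4096-byte blocks, exported over NBD
  bdev=$($rpc -s $sock bdev_malloc_create 64 4096)      # prints the bdev name, e.g. Malloc0
  $rpc -s $sock nbd_start_disk "$bdev" /dev/nbd0

  # write a 1 MiB random pattern through the NBD device and read it back
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0               # non-zero exit means a data mismatch

  # tear down and confirm nothing is left attached
  $rpc -s $sock nbd_stop_disk /dev/nbd0
  $rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true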
00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:41.298 22:14:00 event.app_repeat -- event/event.sh@39 -- # killprocess 3094022 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 3094022 ']' 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 3094022 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3094022 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3094022' 00:06:41.298 killing process with pid 3094022 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@971 -- # kill 3094022 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@976 -- # wait 3094022 00:06:41.298 spdk_app_start is called in Round 0. 00:06:41.298 Shutdown signal received, stop current app iteration 00:06:41.298 Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 reinitialization... 00:06:41.298 spdk_app_start is called in Round 1. 00:06:41.298 Shutdown signal received, stop current app iteration 00:06:41.298 Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 reinitialization... 00:06:41.298 spdk_app_start is called in Round 2. 00:06:41.298 Shutdown signal received, stop current app iteration 00:06:41.298 Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 reinitialization... 00:06:41.298 spdk_app_start is called in Round 3. 
00:06:41.298 Shutdown signal received, stop current app iteration 00:06:41.298 22:14:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:41.298 22:14:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:41.298 00:06:41.298 real 0m16.634s 00:06:41.298 user 0m35.983s 00:06:41.298 sys 0m3.254s 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:41.298 22:14:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.298 ************************************ 00:06:41.298 END TEST app_repeat 00:06:41.298 ************************************ 00:06:41.298 22:14:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:41.298 22:14:00 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:41.298 22:14:00 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:41.298 22:14:00 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.298 22:14:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.298 ************************************ 00:06:41.298 START TEST cpu_locks 00:06:41.298 ************************************ 00:06:41.298 22:14:00 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:41.556 * Looking for test storage... 00:06:41.556 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:41.556 22:14:00 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:41.556 22:14:00 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:41.556 22:14:00 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:41.556 22:14:00 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.556 22:14:00 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:41.556 22:14:00 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.556 22:14:00 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.556 --rc genhtml_branch_coverage=1 00:06:41.556 --rc genhtml_function_coverage=1 00:06:41.556 --rc genhtml_legend=1 00:06:41.556 --rc geninfo_all_blocks=1 00:06:41.556 --rc geninfo_unexecuted_blocks=1 00:06:41.556 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.556 ' 00:06:41.556 22:14:00 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.556 --rc genhtml_branch_coverage=1 00:06:41.556 --rc genhtml_function_coverage=1 00:06:41.556 --rc genhtml_legend=1 00:06:41.556 --rc geninfo_all_blocks=1 00:06:41.556 --rc geninfo_unexecuted_blocks=1 00:06:41.556 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.556 ' 00:06:41.556 22:14:00 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.556 --rc genhtml_branch_coverage=1 00:06:41.556 --rc genhtml_function_coverage=1 00:06:41.556 --rc genhtml_legend=1 00:06:41.556 --rc geninfo_all_blocks=1 00:06:41.556 --rc geninfo_unexecuted_blocks=1 00:06:41.556 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.556 ' 00:06:41.556 22:14:00 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.556 --rc genhtml_branch_coverage=1 00:06:41.556 --rc genhtml_function_coverage=1 00:06:41.556 --rc genhtml_legend=1 00:06:41.556 --rc geninfo_all_blocks=1 00:06:41.556 --rc geninfo_unexecuted_blocks=1 00:06:41.556 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.556 ' 00:06:41.556 22:14:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:41.556 22:14:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:41.556 22:14:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:41.556 22:14:00 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:41.556 22:14:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:41.556 22:14:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.556 22:14:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.556 ************************************ 00:06:41.557 START TEST default_locks 00:06:41.557 ************************************ 00:06:41.557 22:14:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:41.557 22:14:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3096535 00:06:41.557 22:14:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3096535 00:06:41.557 22:14:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.557 22:14:00 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3096535 ']' 00:06:41.557 22:14:00 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.557 22:14:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:41.557 22:14:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.557 22:14:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:41.557 22:14:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.557 [2024-10-29 22:14:00.983202] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:06:41.557 [2024-10-29 22:14:00.983262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096535 ] 00:06:41.557 [2024-10-29 22:14:01.066477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.815 [2024-10-29 22:14:01.114705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.815 22:14:01 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:41.815 22:14:01 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:41.815 22:14:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3096535 00:06:41.815 22:14:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3096535 00:06:41.815 22:14:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.382 lslocks: write error 00:06:42.382 22:14:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3096535 00:06:42.382 22:14:01 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 3096535 ']' 00:06:42.382 22:14:01 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 3096535 00:06:42.382 22:14:01 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:42.382 22:14:01 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:42.382 22:14:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3096535 00:06:42.382 22:14:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:42.382 22:14:01 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:42.382 22:14:01 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3096535' 00:06:42.382 killing process with pid 3096535 00:06:42.382 22:14:01 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 3096535 00:06:42.382 22:14:01 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 3096535 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3096535 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3096535 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3096535 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3096535 ']' 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.950 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3096535) - No such process 00:06:42.950 ERROR: process (pid: 3096535) is no longer running 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:42.950 00:06:42.950 real 0m1.241s 00:06:42.950 user 0m1.187s 00:06:42.950 sys 0m0.595s 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:42.950 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.950 ************************************ 00:06:42.950 END TEST default_locks 00:06:42.950 ************************************ 00:06:42.951 22:14:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:42.951 22:14:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:42.951 22:14:02 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:42.951 22:14:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.951 ************************************ 00:06:42.951 START TEST default_locks_via_rpc 00:06:42.951 ************************************ 00:06:42.951 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:42.951 22:14:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3096740 00:06:42.951 22:14:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3096740 00:06:42.951 22:14:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.951 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3096740 ']' 00:06:42.951 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.951 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 
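The default_locks pass above relies on spdk_tgt taking a per-core file lock (the spdk_cpu_lock entry grepped out of lslocks) and on that lock vanishing once the process is gone. A rough by-hand sketch of the same check; the sleep stands in for the harness's polling, and the lock-file naming is taken from the grep pattern above:

  spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

  $spdk_tgt -m 0x1 &
  pid=$!
  sleep 2                                  # crude; the harness polls the RPC socket instead
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core 0 lock held by pid $pid"

  kill "$pid"; wait "$pid" 2>/dev/null
  lslocks -p "$pid" 2>/dev/null | grep -q spdk_cpu_lock || echo "lock released with the process"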
00:06:42.951 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.951 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.951 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.951 [2024-10-29 22:14:02.309851] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:06:42.951 [2024-10-29 22:14:02.309914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096740 ] 00:06:42.951 [2024-10-29 22:14:02.395829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.951 [2024-10-29 22:14:02.441258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3096740 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3096740 00:06:43.209 22:14:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.777 22:14:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3096740 00:06:43.777 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 3096740 ']' 00:06:43.777 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 3096740 00:06:43.777 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:43.777 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- 
# '[' Linux = Linux ']' 00:06:43.777 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3096740 00:06:43.777 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:43.777 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:43.777 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3096740' 00:06:43.777 killing process with pid 3096740 00:06:43.777 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 3096740 00:06:43.777 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 3096740 00:06:44.035 00:06:44.035 real 0m1.268s 00:06:44.035 user 0m1.243s 00:06:44.035 sys 0m0.573s 00:06:44.035 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.035 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.035 ************************************ 00:06:44.035 END TEST default_locks_via_rpc 00:06:44.035 ************************************ 00:06:44.294 22:14:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:44.294 22:14:03 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:44.294 22:14:03 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.294 22:14:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.294 ************************************ 00:06:44.294 START TEST non_locking_app_on_locked_coremask 00:06:44.294 ************************************ 00:06:44.294 22:14:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:44.294 22:14:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3096940 00:06:44.294 22:14:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3096940 /var/tmp/spdk.sock 00:06:44.294 22:14:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.294 22:14:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3096940 ']' 00:06:44.294 22:14:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.294 22:14:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:44.294 22:14:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.294 22:14:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:44.294 22:14:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.294 [2024-10-29 22:14:03.656587] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
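default_locks_via_rpc toggles that same lock at runtime rather than at startup. A hedged sketch of the RPC sequence against an already-running spdk_tgt (default RPC socket, $pid as in the sketch above):

  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py

  # drop the CPU core locks without restarting the target...
  $rpc framework_disable_cpumask_locks
  lslocks -p "$pid" | grep -c spdk_cpu_lock      # expected: 0

  # ...then take them again
  $rpc framework_enable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"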
00:06:44.294 [2024-10-29 22:14:03.656647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096940 ] 00:06:44.294 [2024-10-29 22:14:03.739874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.294 [2024-10-29 22:14:03.787023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.553 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:44.553 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:44.553 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3096949 00:06:44.553 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3096949 /var/tmp/spdk2.sock 00:06:44.553 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:44.553 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3096949 ']' 00:06:44.553 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.553 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:44.553 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.553 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:44.553 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.553 [2024-10-29 22:14:04.028231] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:06:44.553 [2024-10-29 22:14:04.028315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096949 ] 00:06:44.811 [2024-10-29 22:14:04.123167] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
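non_locking_app_on_locked_coremask exercises the one combination in which two targets may share core 0: the second instance opts out of the lock and answers on its own RPC socket. A minimal sketch, assuming the first instance already holds core 0 on the default socket:

  spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

  $spdk_tgt -m 0x1 &                                           # claims the core 0 lock

  # same core, but with the lock skipped and a separate RPC socket,
  # so both instances can run side by side
  $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &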
00:06:44.811 [2024-10-29 22:14:04.123194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.811 [2024-10-29 22:14:04.221248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.377 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:45.377 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:45.377 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3096940 00:06:45.377 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3096940 00:06:45.635 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.570 lslocks: write error 00:06:46.570 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3096940 00:06:46.570 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3096940 ']' 00:06:46.570 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3096940 00:06:46.570 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:46.570 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:46.570 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3096940 00:06:46.570 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:46.570 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:46.570 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3096940' 00:06:46.570 killing process with pid 3096940 00:06:46.570 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3096940 00:06:46.570 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3096940 00:06:47.136 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3096949 00:06:47.136 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3096949 ']' 00:06:47.136 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3096949 00:06:47.136 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:47.136 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:47.136 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3096949 00:06:47.136 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:47.136 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:47.136 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3096949' 00:06:47.136 
killing process with pid 3096949 00:06:47.136 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3096949 00:06:47.136 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3096949 00:06:47.395 00:06:47.395 real 0m3.104s 00:06:47.395 user 0m3.267s 00:06:47.395 sys 0m1.146s 00:06:47.395 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:47.395 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.395 ************************************ 00:06:47.395 END TEST non_locking_app_on_locked_coremask 00:06:47.395 ************************************ 00:06:47.395 22:14:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:47.395 22:14:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:47.395 22:14:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.395 22:14:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.395 ************************************ 00:06:47.395 START TEST locking_app_on_unlocked_coremask 00:06:47.395 ************************************ 00:06:47.395 22:14:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:47.395 22:14:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3097339 00:06:47.395 22:14:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3097339 /var/tmp/spdk.sock 00:06:47.395 22:14:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:47.395 22:14:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3097339 ']' 00:06:47.395 22:14:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.395 22:14:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:47.395 22:14:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.395 22:14:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:47.395 22:14:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.395 [2024-10-29 22:14:06.848854] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:06:47.395 [2024-10-29 22:14:06.848944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097339 ] 00:06:47.654 [2024-10-29 22:14:06.936571] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
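Each of these passes ends with the killprocess helper seen in the trace: confirm the pid still maps to an SPDK reactor before signalling it, then wait so the core lock is released. A simplified reading of that pattern (the reactor_0 comparison is an assumption based on the process_name lines above; the real helper also special-cases processes run under sudo):

  pid=3097339          # pid lifted from the trace above; placeholder in a real run
  if [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ]; then
      kill "$pid"
      wait "$pid" 2>/dev/null || true      # wait only works for children of this shell
  fi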
00:06:47.654 [2024-10-29 22:14:06.936601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.654 [2024-10-29 22:14:06.983291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.913 22:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:47.913 22:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:47.913 22:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3097382 00:06:47.913 22:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3097382 /var/tmp/spdk2.sock 00:06:47.913 22:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.913 22:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3097382 ']' 00:06:47.913 22:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.913 22:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:47.913 22:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.913 22:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:47.913 22:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.913 [2024-10-29 22:14:07.233531] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:06:47.913 [2024-10-29 22:14:07.233598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097382 ] 00:06:47.913 [2024-10-29 22:14:07.337571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.913 [2024-10-29 22:14:07.426608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.848 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:48.848 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:48.848 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3097382 00:06:48.848 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3097382 00:06:48.848 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.782 lslocks: write error 00:06:49.782 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3097339 00:06:49.782 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3097339 ']' 00:06:49.782 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3097339 00:06:49.782 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:49.782 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:49.782 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3097339 00:06:50.040 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:50.040 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:50.040 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3097339' 00:06:50.040 killing process with pid 3097339 00:06:50.040 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3097339 00:06:50.040 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3097339 00:06:50.606 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3097382 00:06:50.606 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3097382 ']' 00:06:50.606 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3097382 00:06:50.606 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:50.606 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:50.606 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3097382 00:06:50.606 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:50.606 22:14:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:50.606 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3097382' 00:06:50.606 killing process with pid 3097382 00:06:50.606 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3097382 00:06:50.606 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3097382 00:06:50.866 00:06:50.866 real 0m3.471s 00:06:50.866 user 0m3.646s 00:06:50.866 sys 0m1.266s 00:06:50.866 22:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:50.866 22:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.866 ************************************ 00:06:50.866 END TEST locking_app_on_unlocked_coremask 00:06:50.866 ************************************ 00:06:50.866 22:14:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:50.866 22:14:10 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:50.866 22:14:10 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.866 22:14:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.866 ************************************ 00:06:50.866 START TEST locking_app_on_locked_coremask 00:06:50.866 ************************************ 00:06:50.866 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:50.866 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3097894 00:06:50.866 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3097894 /var/tmp/spdk.sock 00:06:50.866 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.866 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3097894 ']' 00:06:50.866 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.866 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.866 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.866 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.866 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.125 [2024-10-29 22:14:10.401439] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:06:51.125 [2024-10-29 22:14:10.401502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097894 ] 00:06:51.125 [2024-10-29 22:14:10.489285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.125 [2024-10-29 22:14:10.536748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3097905 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3097905 /var/tmp/spdk2.sock 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3097905 /var/tmp/spdk2.sock 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3097905 /var/tmp/spdk2.sock 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3097905 ']' 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:51.383 22:14:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.384 [2024-10-29 22:14:10.795568] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
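locking_app_on_locked_coremask is the negative case: with the lock left enabled, a second target on the same core must refuse to start, which is exactly the claim_cpu_cores error reported just below. A rough sketch of the expected behaviour:

  spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

  $spdk_tgt -m 0x1 &                       # first instance holds the core 0 lock
  sleep 2
  # second instance, same core, locks still enabled: expected to exit with
  # "Cannot create lock on core 0, probably process <pid> has claimed it."
  if ! $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "second instance rejected, as expected"
  fi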
00:06:51.384 [2024-10-29 22:14:10.795634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097905 ] 00:06:51.384 [2024-10-29 22:14:10.891642] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3097894 has claimed it. 00:06:51.384 [2024-10-29 22:14:10.891685] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:51.949 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3097905) - No such process 00:06:51.949 ERROR: process (pid: 3097905) is no longer running 00:06:51.949 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:51.949 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:51.949 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:51.949 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.949 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.949 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.949 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3097894 00:06:51.949 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3097894 00:06:51.949 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.881 lslocks: write error 00:06:52.881 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3097894 00:06:52.881 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3097894 ']' 00:06:52.881 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3097894 00:06:52.881 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:52.881 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:52.881 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3097894 00:06:52.881 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:52.881 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:52.881 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3097894' 00:06:52.881 killing process with pid 3097894 00:06:52.881 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3097894 00:06:52.881 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3097894 00:06:53.139 00:06:53.139 real 0m2.046s 00:06:53.139 user 0m2.163s 00:06:53.139 sys 0m0.787s 00:06:53.139 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:06:53.139 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.139 ************************************ 00:06:53.139 END TEST locking_app_on_locked_coremask 00:06:53.139 ************************************ 00:06:53.139 22:14:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:53.139 22:14:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:53.139 22:14:12 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:53.139 22:14:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.139 ************************************ 00:06:53.139 START TEST locking_overlapped_coremask 00:06:53.139 ************************************ 00:06:53.139 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:53.139 22:14:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3098120 00:06:53.139 22:14:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3098120 /var/tmp/spdk.sock 00:06:53.139 22:14:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:53.139 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3098120 ']' 00:06:53.139 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.139 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:53.139 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.139 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:53.139 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.139 [2024-10-29 22:14:12.530191] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:06:53.139 [2024-10-29 22:14:12.530254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098120 ] 00:06:53.139 [2024-10-29 22:14:12.616831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.397 [2024-10-29 22:14:12.664847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.397 [2024-10-29 22:14:12.664872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.397 [2024-10-29 22:14:12.664874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3098282 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3098282 /var/tmp/spdk2.sock 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3098282 /var/tmp/spdk2.sock 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3098282 /var/tmp/spdk2.sock 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3098282 ']' 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:53.398 22:14:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.398 [2024-10-29 22:14:12.903937] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:06:53.398 [2024-10-29 22:14:12.903994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098282 ] 00:06:53.656 [2024-10-29 22:14:13.003610] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3098120 has claimed it. 00:06:53.656 [2024-10-29 22:14:13.003651] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:54.223 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3098282) - No such process 00:06:54.223 ERROR: process (pid: 3098282) is no longer running 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3098120 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 3098120 ']' 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 3098120 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3098120 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3098120' 00:06:54.223 killing process with pid 3098120 00:06:54.223 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 3098120 00:06:54.223 22:14:13 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 3098120 00:06:54.483 00:06:54.483 real 0m1.442s 00:06:54.483 user 0m3.942s 00:06:54.483 sys 0m0.439s 00:06:54.483 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.483 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.483 ************************************ 00:06:54.483 END TEST locking_overlapped_coremask 00:06:54.483 ************************************ 00:06:54.483 22:14:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:54.483 22:14:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:54.483 22:14:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.483 22:14:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.742 ************************************ 00:06:54.742 START TEST locking_overlapped_coremask_via_rpc 00:06:54.742 ************************************ 00:06:54.743 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:54.743 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3098442 00:06:54.743 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3098442 /var/tmp/spdk.sock 00:06:54.743 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:54.743 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3098442 ']' 00:06:54.743 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.743 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:54.743 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.743 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:54.743 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.743 [2024-10-29 22:14:14.057624] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:06:54.743 [2024-10-29 22:14:14.057696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098442 ] 00:06:54.743 [2024-10-29 22:14:14.141734] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:54.743 [2024-10-29 22:14:14.141763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.743 [2024-10-29 22:14:14.192553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.743 [2024-10-29 22:14:14.192653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.743 [2024-10-29 22:14:14.192654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.002 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:55.002 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:55.002 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3098512 00:06:55.002 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3098512 /var/tmp/spdk2.sock 00:06:55.002 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:55.002 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3098512 ']' 00:06:55.002 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.002 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:55.002 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.002 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:55.002 22:14:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.002 [2024-10-29 22:14:14.446640] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:06:55.002 [2024-10-29 22:14:14.446704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098512 ] 00:06:55.261 [2024-10-29 22:14:14.547229] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:55.261 [2024-10-29 22:14:14.547262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.261 [2024-10-29 22:14:14.638242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.261 [2024-10-29 22:14:14.641347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.261 [2024-10-29 22:14:14.641350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.829 [2024-10-29 22:14:15.331363] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3098442 has claimed it. 
00:06:55.829 request: 00:06:55.829 { 00:06:55.829 "method": "framework_enable_cpumask_locks", 00:06:55.829 "req_id": 1 00:06:55.829 } 00:06:55.829 Got JSON-RPC error response 00:06:55.829 response: 00:06:55.829 { 00:06:55.829 "code": -32603, 00:06:55.829 "message": "Failed to claim CPU core: 2" 00:06:55.829 } 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3098442 /var/tmp/spdk.sock 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3098442 ']' 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:55.829 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.088 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:56.088 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:56.088 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3098512 /var/tmp/spdk2.sock 00:06:56.088 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3098512 ']' 00:06:56.088 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.088 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:56.088 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
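For reference, the -32603 failure above is the expected behaviour of the core-lock RPC path being tested: the first target is started with --disable-cpumask-locks, the locks are then claimed over RPC (creating the /var/tmp/spdk_cpu_lock_000..002 files that check_remaining_locks verifies below), and a second target whose mask overlaps on core 2 is refused. Reproduced by hand from the spdk checkout with the same binaries and sockets as this run, the sequence is roughly:

    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                        # cores 0-2, no locks taken at startup
    ./scripts/rpc.py framework_enable_cpumask_locks                              # claims cores 0-2
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks       # fails: core 2 already locked (-32603)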
00:06:56.088 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:56.088 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.347 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:56.347 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:56.348 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:56.348 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:56.348 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:56.348 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:56.348 00:06:56.348 real 0m1.729s 00:06:56.348 user 0m0.801s 00:06:56.348 sys 0m0.184s 00:06:56.348 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.348 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.348 ************************************ 00:06:56.348 END TEST locking_overlapped_coremask_via_rpc 00:06:56.348 ************************************ 00:06:56.348 22:14:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:56.348 22:14:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3098442 ]] 00:06:56.348 22:14:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3098442 00:06:56.348 22:14:15 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3098442 ']' 00:06:56.348 22:14:15 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3098442 00:06:56.348 22:14:15 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:56.348 22:14:15 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:56.348 22:14:15 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3098442 00:06:56.348 22:14:15 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:56.348 22:14:15 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:56.348 22:14:15 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3098442' 00:06:56.348 killing process with pid 3098442 00:06:56.348 22:14:15 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3098442 00:06:56.348 22:14:15 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3098442 00:06:56.917 22:14:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3098512 ]] 00:06:56.918 22:14:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3098512 00:06:56.918 22:14:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3098512 ']' 00:06:56.918 22:14:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3098512 00:06:56.918 22:14:16 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:56.918 22:14:16 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:06:56.918 22:14:16 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3098512 00:06:56.918 22:14:16 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:56.918 22:14:16 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:56.918 22:14:16 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3098512' 00:06:56.918 killing process with pid 3098512 00:06:56.918 22:14:16 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3098512 00:06:56.918 22:14:16 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3098512 00:06:57.177 22:14:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.178 22:14:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:57.178 22:14:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3098442 ]] 00:06:57.178 22:14:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3098442 00:06:57.178 22:14:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3098442 ']' 00:06:57.178 22:14:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3098442 00:06:57.178 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3098442) - No such process 00:06:57.178 22:14:16 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3098442 is not found' 00:06:57.178 Process with pid 3098442 is not found 00:06:57.178 22:14:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3098512 ]] 00:06:57.178 22:14:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3098512 00:06:57.178 22:14:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3098512 ']' 00:06:57.178 22:14:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3098512 00:06:57.178 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3098512) - No such process 00:06:57.178 22:14:16 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3098512 is not found' 00:06:57.178 Process with pid 3098512 is not found 00:06:57.178 22:14:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.178 00:06:57.178 real 0m15.797s 00:06:57.178 user 0m26.127s 00:06:57.178 sys 0m6.082s 00:06:57.178 22:14:16 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.178 22:14:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.178 ************************************ 00:06:57.178 END TEST cpu_locks 00:06:57.178 ************************************ 00:06:57.178 00:06:57.178 real 0m41.174s 00:06:57.178 user 1m16.464s 00:06:57.178 sys 0m10.526s 00:06:57.178 22:14:16 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.178 22:14:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.178 ************************************ 00:06:57.178 END TEST event 00:06:57.178 ************************************ 00:06:57.178 22:14:16 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:57.178 22:14:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:57.178 22:14:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.178 22:14:16 -- common/autotest_common.sh@10 -- # set +x 00:06:57.178 ************************************ 00:06:57.178 START TEST thread 00:06:57.178 ************************************ 00:06:57.178 22:14:16 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:57.437 * Looking for test storage... 00:06:57.437 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:57.437 22:14:16 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:57.437 22:14:16 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:57.437 22:14:16 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:57.437 22:14:16 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:57.437 22:14:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.437 22:14:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.437 22:14:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.437 22:14:16 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.437 22:14:16 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.437 22:14:16 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.437 22:14:16 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.437 22:14:16 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.437 22:14:16 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.437 22:14:16 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.437 22:14:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.437 22:14:16 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:57.437 22:14:16 thread -- scripts/common.sh@345 -- # : 1 00:06:57.438 22:14:16 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.438 22:14:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.438 22:14:16 thread -- scripts/common.sh@365 -- # decimal 1 00:06:57.438 22:14:16 thread -- scripts/common.sh@353 -- # local d=1 00:06:57.438 22:14:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.438 22:14:16 thread -- scripts/common.sh@355 -- # echo 1 00:06:57.438 22:14:16 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.438 22:14:16 thread -- scripts/common.sh@366 -- # decimal 2 00:06:57.438 22:14:16 thread -- scripts/common.sh@353 -- # local d=2 00:06:57.438 22:14:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.438 22:14:16 thread -- scripts/common.sh@355 -- # echo 2 00:06:57.438 22:14:16 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.438 22:14:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.438 22:14:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.438 22:14:16 thread -- scripts/common.sh@368 -- # return 0 00:06:57.438 22:14:16 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.438 22:14:16 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:57.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.438 --rc genhtml_branch_coverage=1 00:06:57.438 --rc genhtml_function_coverage=1 00:06:57.438 --rc genhtml_legend=1 00:06:57.438 --rc geninfo_all_blocks=1 00:06:57.438 --rc geninfo_unexecuted_blocks=1 00:06:57.438 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:57.438 ' 00:06:57.438 22:14:16 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:57.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.438 --rc genhtml_branch_coverage=1 00:06:57.438 --rc genhtml_function_coverage=1 00:06:57.438 --rc genhtml_legend=1 
00:06:57.438 --rc geninfo_all_blocks=1 00:06:57.438 --rc geninfo_unexecuted_blocks=1 00:06:57.438 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:57.438 ' 00:06:57.438 22:14:16 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:57.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.438 --rc genhtml_branch_coverage=1 00:06:57.438 --rc genhtml_function_coverage=1 00:06:57.438 --rc genhtml_legend=1 00:06:57.438 --rc geninfo_all_blocks=1 00:06:57.438 --rc geninfo_unexecuted_blocks=1 00:06:57.438 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:57.438 ' 00:06:57.438 22:14:16 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:57.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.438 --rc genhtml_branch_coverage=1 00:06:57.438 --rc genhtml_function_coverage=1 00:06:57.438 --rc genhtml_legend=1 00:06:57.438 --rc geninfo_all_blocks=1 00:06:57.438 --rc geninfo_unexecuted_blocks=1 00:06:57.438 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:57.438 ' 00:06:57.438 22:14:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.438 22:14:16 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:57.438 22:14:16 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.438 22:14:16 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.438 ************************************ 00:06:57.438 START TEST thread_poller_perf 00:06:57.438 ************************************ 00:06:57.438 22:14:16 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.438 [2024-10-29 22:14:16.895182] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:06:57.438 [2024-10-29 22:14:16.895264] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098963 ] 00:06:57.696 [2024-10-29 22:14:16.984844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.696 [2024-10-29 22:14:17.029375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.696 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:58.633 [2024-10-29T21:14:18.157Z] ====================================== 00:06:58.633 [2024-10-29T21:14:18.157Z] busy:2303997370 (cyc) 00:06:58.633 [2024-10-29T21:14:18.157Z] total_run_count: 824000 00:06:58.633 [2024-10-29T21:14:18.157Z] tsc_hz: 2300000000 (cyc) 00:06:58.633 [2024-10-29T21:14:18.157Z] ====================================== 00:06:58.633 [2024-10-29T21:14:18.157Z] poller_cost: 2796 (cyc), 1215 (nsec) 00:06:58.633 00:06:58.633 real 0m1.198s 00:06:58.633 user 0m1.098s 00:06:58.633 sys 0m0.095s 00:06:58.633 22:14:18 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:58.633 22:14:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:58.633 ************************************ 00:06:58.633 END TEST thread_poller_perf 00:06:58.633 ************************************ 00:06:58.633 22:14:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:58.633 22:14:18 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:58.633 22:14:18 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:58.633 22:14:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.893 ************************************ 00:06:58.893 START TEST thread_poller_perf 00:06:58.893 ************************************ 00:06:58.893 22:14:18 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:58.893 [2024-10-29 22:14:18.179989] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:06:58.893 [2024-10-29 22:14:18.180074] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099162 ] 00:06:58.893 [2024-10-29 22:14:18.268736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.893 [2024-10-29 22:14:18.315543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.893 Running 1000 pollers for 1 seconds with 0 microseconds period. 
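As a side note on reading these summaries: poller_cost is just the busy cycle count divided by total_run_count, with the nanosecond figure derived from tsc_hz. A quick check of the 1 microsecond-period run reported above (the 0 microsecond run below works the same way), with the numbers copied straight from its summary:

    awk 'BEGIN {
        busy = 2303997370; runs = 824000; hz = 2300000000   # values printed in the summary above
        cyc = busy / runs                                    # ~2796 cycles per poller invocation
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz
    }'

which lines up with the 2796 (cyc), 1215 (nsec) figure printed above.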
00:07:00.011 [2024-10-29T21:14:19.535Z] ====================================== 00:07:00.011 [2024-10-29T21:14:19.535Z] busy:2301253884 (cyc) 00:07:00.011 [2024-10-29T21:14:19.535Z] total_run_count: 12793000 00:07:00.011 [2024-10-29T21:14:19.535Z] tsc_hz: 2300000000 (cyc) 00:07:00.011 [2024-10-29T21:14:19.535Z] ====================================== 00:07:00.011 [2024-10-29T21:14:19.535Z] poller_cost: 179 (cyc), 77 (nsec) 00:07:00.011 00:07:00.011 real 0m1.195s 00:07:00.011 user 0m1.101s 00:07:00.011 sys 0m0.091s 00:07:00.011 22:14:19 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.011 22:14:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.011 ************************************ 00:07:00.011 END TEST thread_poller_perf 00:07:00.011 ************************************ 00:07:00.011 22:14:19 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:07:00.011 22:14:19 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:00.011 22:14:19 thread -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:00.011 22:14:19 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.011 22:14:19 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.011 ************************************ 00:07:00.011 START TEST thread_spdk_lock 00:07:00.011 ************************************ 00:07:00.011 22:14:19 thread.thread_spdk_lock -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:00.011 [2024-10-29 22:14:19.457928] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:00.011 [2024-10-29 22:14:19.458009] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099331 ] 00:07:00.287 [2024-10-29 22:14:19.548562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.287 [2024-10-29 22:14:19.593993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.287 [2024-10-29 22:14:19.593993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.856 [2024-10-29 22:14:20.085962] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:00.856 [2024-10-29 22:14:20.086005] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:00.856 [2024-10-29 22:14:20.086016] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x14d2c80 00:07:00.856 [2024-10-29 22:14:20.086774] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:00.856 [2024-10-29 22:14:20.086877] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:00.856 [2024-10-29 
22:14:20.086898] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:00.856 Starting test contend 00:07:00.856 Worker Delay Wait us Hold us Total us 00:07:00.856 0 3 166000 185576 351577 00:07:00.856 1 5 84258 287325 371583 00:07:00.856 PASS test contend 00:07:00.856 Starting test hold_by_poller 00:07:00.856 PASS test hold_by_poller 00:07:00.856 Starting test hold_by_message 00:07:00.856 PASS test hold_by_message 00:07:00.856 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:07:00.856 100014 assertions passed 00:07:00.856 0 assertions failed 00:07:00.856 00:07:00.856 real 0m0.687s 00:07:00.856 user 0m1.089s 00:07:00.856 sys 0m0.088s 00:07:00.856 22:14:20 thread.thread_spdk_lock -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.856 22:14:20 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:07:00.856 ************************************ 00:07:00.856 END TEST thread_spdk_lock 00:07:00.856 ************************************ 00:07:00.856 00:07:00.856 real 0m3.526s 00:07:00.856 user 0m3.477s 00:07:00.856 sys 0m0.567s 00:07:00.856 22:14:20 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.856 22:14:20 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.856 ************************************ 00:07:00.856 END TEST thread 00:07:00.856 ************************************ 00:07:00.856 22:14:20 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:00.856 22:14:20 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:00.856 22:14:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:00.856 22:14:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.856 22:14:20 -- common/autotest_common.sh@10 -- # set +x 00:07:00.856 ************************************ 00:07:00.856 START TEST app_cmdline 00:07:00.856 ************************************ 00:07:00.856 22:14:20 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:00.856 * Looking for test storage... 
00:07:00.856 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:00.856 22:14:20 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:00.856 22:14:20 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:00.856 22:14:20 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:01.116 22:14:20 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:01.116 22:14:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.116 22:14:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.116 22:14:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.117 22:14:20 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:01.117 22:14:20 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.117 22:14:20 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:01.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.117 --rc genhtml_branch_coverage=1 00:07:01.117 --rc genhtml_function_coverage=1 00:07:01.117 --rc genhtml_legend=1 00:07:01.117 --rc geninfo_all_blocks=1 00:07:01.117 --rc geninfo_unexecuted_blocks=1 00:07:01.117 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:01.117 ' 00:07:01.117 22:14:20 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:01.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.117 --rc genhtml_branch_coverage=1 00:07:01.117 --rc genhtml_function_coverage=1 00:07:01.117 --rc 
genhtml_legend=1 00:07:01.117 --rc geninfo_all_blocks=1 00:07:01.117 --rc geninfo_unexecuted_blocks=1 00:07:01.117 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:01.117 ' 00:07:01.117 22:14:20 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:01.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.117 --rc genhtml_branch_coverage=1 00:07:01.117 --rc genhtml_function_coverage=1 00:07:01.117 --rc genhtml_legend=1 00:07:01.117 --rc geninfo_all_blocks=1 00:07:01.117 --rc geninfo_unexecuted_blocks=1 00:07:01.117 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:01.117 ' 00:07:01.117 22:14:20 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:01.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.117 --rc genhtml_branch_coverage=1 00:07:01.117 --rc genhtml_function_coverage=1 00:07:01.117 --rc genhtml_legend=1 00:07:01.117 --rc geninfo_all_blocks=1 00:07:01.117 --rc geninfo_unexecuted_blocks=1 00:07:01.117 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:01.117 ' 00:07:01.117 22:14:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.117 22:14:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.117 22:14:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3099451 00:07:01.117 22:14:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3099451 00:07:01.117 22:14:20 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 3099451 ']' 00:07:01.117 22:14:20 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.117 22:14:20 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:01.117 22:14:20 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.117 22:14:20 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:01.117 22:14:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.117 [2024-10-29 22:14:20.468441] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:07:01.117 [2024-10-29 22:14:20.468501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099451 ] 00:07:01.117 [2024-10-29 22:14:20.551721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.117 [2024-10-29 22:14:20.600181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.377 22:14:20 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:01.377 22:14:20 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:07:01.377 22:14:20 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:01.636 { 00:07:01.636 "version": "SPDK v25.01-pre git sha1 344e7bdd4", 00:07:01.636 "fields": { 00:07:01.636 "major": 25, 00:07:01.636 "minor": 1, 00:07:01.636 "patch": 0, 00:07:01.636 "suffix": "-pre", 00:07:01.636 "commit": "344e7bdd4" 00:07:01.636 } 00:07:01.636 } 00:07:01.636 22:14:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:01.636 22:14:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:01.636 22:14:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:01.636 22:14:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:01.636 22:14:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:01.636 22:14:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:01.636 22:14:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:01.636 22:14:21 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.636 22:14:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.636 22:14:21 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.636 22:14:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:01.636 22:14:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:01.636 22:14:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.636 22:14:21 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:01.636 22:14:21 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.636 22:14:21 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:01.636 22:14:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.636 22:14:21 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:01.636 22:14:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.636 22:14:21 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:01.636 22:14:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.636 22:14:21 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:01.636 22:14:21 app_cmdline -- 
common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:07:01.636 22:14:21 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.897 request: 00:07:01.897 { 00:07:01.897 "method": "env_dpdk_get_mem_stats", 00:07:01.897 "req_id": 1 00:07:01.897 } 00:07:01.897 Got JSON-RPC error response 00:07:01.897 response: 00:07:01.897 { 00:07:01.897 "code": -32601, 00:07:01.897 "message": "Method not found" 00:07:01.897 } 00:07:01.897 22:14:21 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:01.897 22:14:21 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.897 22:14:21 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.897 22:14:21 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.897 22:14:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3099451 00:07:01.897 22:14:21 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 3099451 ']' 00:07:01.897 22:14:21 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 3099451 00:07:01.897 22:14:21 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:07:01.897 22:14:21 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:01.897 22:14:21 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3099451 00:07:01.897 22:14:21 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:01.897 22:14:21 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:01.897 22:14:21 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3099451' 00:07:01.897 killing process with pid 3099451 00:07:01.897 22:14:21 app_cmdline -- common/autotest_common.sh@971 -- # kill 3099451 00:07:01.897 22:14:21 app_cmdline -- common/autotest_common.sh@976 -- # wait 3099451 00:07:02.157 00:07:02.157 real 0m1.360s 00:07:02.157 user 0m1.564s 00:07:02.157 sys 0m0.486s 00:07:02.157 22:14:21 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.157 22:14:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.157 ************************************ 00:07:02.157 END TEST app_cmdline 00:07:02.157 ************************************ 00:07:02.157 22:14:21 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:02.157 22:14:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:02.157 22:14:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.157 22:14:21 -- common/autotest_common.sh@10 -- # set +x 00:07:02.416 ************************************ 00:07:02.416 START TEST version 00:07:02.416 ************************************ 00:07:02.416 22:14:21 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:02.416 * Looking for test storage... 
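For reference, the app_cmdline run above exercises the RPC allow-list: the target is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and anything else (here env_dpdk_get_mem_stats) is rejected with the -32601 "Method not found" response shown. Done by hand against a target started the same way, the checks are roughly:

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version                        # version object as printed above
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort    # exactly the two allowed methods
    ./scripts/rpc.py env_dpdk_get_mem_stats                  # -32601 Method not found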
00:07:02.416 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:02.416 22:14:21 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.416 22:14:21 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:02.416 22:14:21 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.416 22:14:21 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.416 22:14:21 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.416 22:14:21 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.416 22:14:21 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.416 22:14:21 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.416 22:14:21 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.416 22:14:21 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.416 22:14:21 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.416 22:14:21 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.416 22:14:21 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.416 22:14:21 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.416 22:14:21 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.416 22:14:21 version -- scripts/common.sh@344 -- # case "$op" in 00:07:02.416 22:14:21 version -- scripts/common.sh@345 -- # : 1 00:07:02.416 22:14:21 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.416 22:14:21 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.416 22:14:21 version -- scripts/common.sh@365 -- # decimal 1 00:07:02.416 22:14:21 version -- scripts/common.sh@353 -- # local d=1 00:07:02.417 22:14:21 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.417 22:14:21 version -- scripts/common.sh@355 -- # echo 1 00:07:02.417 22:14:21 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.417 22:14:21 version -- scripts/common.sh@366 -- # decimal 2 00:07:02.417 22:14:21 version -- scripts/common.sh@353 -- # local d=2 00:07:02.417 22:14:21 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.417 22:14:21 version -- scripts/common.sh@355 -- # echo 2 00:07:02.417 22:14:21 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.417 22:14:21 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.417 22:14:21 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.417 22:14:21 version -- scripts/common.sh@368 -- # return 0 00:07:02.417 22:14:21 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.417 22:14:21 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.417 --rc genhtml_branch_coverage=1 00:07:02.417 --rc genhtml_function_coverage=1 00:07:02.417 --rc genhtml_legend=1 00:07:02.417 --rc geninfo_all_blocks=1 00:07:02.417 --rc geninfo_unexecuted_blocks=1 00:07:02.417 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.417 ' 00:07:02.417 22:14:21 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.417 --rc genhtml_branch_coverage=1 00:07:02.417 --rc genhtml_function_coverage=1 00:07:02.417 --rc genhtml_legend=1 00:07:02.417 --rc geninfo_all_blocks=1 00:07:02.417 --rc geninfo_unexecuted_blocks=1 00:07:02.417 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.417 ' 00:07:02.417 22:14:21 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.417 --rc genhtml_branch_coverage=1 00:07:02.417 --rc genhtml_function_coverage=1 00:07:02.417 --rc genhtml_legend=1 00:07:02.417 --rc geninfo_all_blocks=1 00:07:02.417 --rc geninfo_unexecuted_blocks=1 00:07:02.417 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.417 ' 00:07:02.417 22:14:21 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.417 --rc genhtml_branch_coverage=1 00:07:02.417 --rc genhtml_function_coverage=1 00:07:02.417 --rc genhtml_legend=1 00:07:02.417 --rc geninfo_all_blocks=1 00:07:02.417 --rc geninfo_unexecuted_blocks=1 00:07:02.417 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.417 ' 00:07:02.417 22:14:21 version -- app/version.sh@17 -- # get_header_version major 00:07:02.417 22:14:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:02.417 22:14:21 version -- app/version.sh@14 -- # cut -f2 00:07:02.417 22:14:21 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.417 22:14:21 version -- app/version.sh@17 -- # major=25 00:07:02.417 22:14:21 version -- app/version.sh@18 -- # get_header_version minor 00:07:02.417 22:14:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:02.417 22:14:21 version -- app/version.sh@14 -- # cut -f2 00:07:02.417 22:14:21 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.417 22:14:21 version -- app/version.sh@18 -- # minor=1 00:07:02.417 22:14:21 version -- app/version.sh@19 -- # get_header_version patch 00:07:02.417 22:14:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:02.417 22:14:21 version -- app/version.sh@14 -- # cut -f2 00:07:02.417 22:14:21 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.417 22:14:21 version -- app/version.sh@19 -- # patch=0 00:07:02.417 22:14:21 version -- app/version.sh@20 -- # get_header_version suffix 00:07:02.417 22:14:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:02.417 22:14:21 version -- app/version.sh@14 -- # cut -f2 00:07:02.417 22:14:21 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.417 22:14:21 version -- app/version.sh@20 -- # suffix=-pre 00:07:02.417 22:14:21 version -- app/version.sh@22 -- # version=25.1 00:07:02.417 22:14:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:02.417 22:14:21 version -- app/version.sh@28 -- # version=25.1rc0 00:07:02.417 22:14:21 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:02.417 22:14:21 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:07:02.695 22:14:21 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:02.695 22:14:21 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:02.695 00:07:02.695 real 0m0.270s 00:07:02.695 user 0m0.161s 00:07:02.695 sys 0m0.164s 00:07:02.695 22:14:21 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.695 22:14:21 version -- common/autotest_common.sh@10 -- # set +x 00:07:02.695 ************************************ 00:07:02.695 END TEST version 00:07:02.695 ************************************ 00:07:02.695 22:14:22 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:02.695 22:14:22 -- spdk/autotest.sh@194 -- # uname -s 00:07:02.695 22:14:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:02.695 22:14:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:02.695 22:14:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:02.695 22:14:22 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:02.695 22:14:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.695 22:14:22 -- common/autotest_common.sh@10 -- # set +x 00:07:02.695 22:14:22 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:07:02.695 22:14:22 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:07:02.695 22:14:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:07:02.695 22:14:22 -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]] 00:07:02.695 22:14:22 -- spdk/autotest.sh@371 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:02.695 22:14:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:02.695 22:14:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.695 22:14:22 -- common/autotest_common.sh@10 -- # set +x 00:07:02.695 ************************************ 00:07:02.695 START TEST llvm_fuzz 00:07:02.695 ************************************ 00:07:02.695 22:14:22 llvm_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:02.695 * Looking for test storage... 
00:07:02.695 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:02.695 22:14:22 llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.695 22:14:22 llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:07:02.695 22:14:22 llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.954 22:14:22 llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.954 22:14:22 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:02.954 22:14:22 llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.954 22:14:22 llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.954 --rc genhtml_branch_coverage=1 00:07:02.954 --rc genhtml_function_coverage=1 00:07:02.954 --rc genhtml_legend=1 00:07:02.954 --rc geninfo_all_blocks=1 00:07:02.954 --rc geninfo_unexecuted_blocks=1 00:07:02.954 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.954 ' 00:07:02.954 22:14:22 llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.955 --rc genhtml_branch_coverage=1 00:07:02.955 --rc genhtml_function_coverage=1 00:07:02.955 --rc genhtml_legend=1 00:07:02.955 --rc geninfo_all_blocks=1 00:07:02.955 --rc 
geninfo_unexecuted_blocks=1 00:07:02.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.955 ' 00:07:02.955 22:14:22 llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.955 --rc genhtml_branch_coverage=1 00:07:02.955 --rc genhtml_function_coverage=1 00:07:02.955 --rc genhtml_legend=1 00:07:02.955 --rc geninfo_all_blocks=1 00:07:02.955 --rc geninfo_unexecuted_blocks=1 00:07:02.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.955 ' 00:07:02.955 22:14:22 llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.955 --rc genhtml_branch_coverage=1 00:07:02.955 --rc genhtml_function_coverage=1 00:07:02.955 --rc genhtml_legend=1 00:07:02.955 --rc geninfo_all_blocks=1 00:07:02.955 --rc geninfo_unexecuted_blocks=1 00:07:02.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.955 ' 00:07:02.955 22:14:22 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:02.955 22:14:22 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:02.955 22:14:22 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:07:02.955 22:14:22 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:07:02.955 22:14:22 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:07:02.955 22:14:22 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:02.955 22:14:22 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:02.955 22:14:22 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:02.955 22:14:22 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:02.955 22:14:22 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:02.955 22:14:22 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:02.955 22:14:22 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:02.955 22:14:22 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:02.955 22:14:22 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:02.955 22:14:22 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:02.955 22:14:22 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:02.955 22:14:22 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:02.955 22:14:22 llvm_fuzz -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:02.955 22:14:22 llvm_fuzz -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.955 22:14:22 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:02.955 ************************************ 00:07:02.955 START TEST nvmf_llvm_fuzz 00:07:02.955 ************************************ 00:07:02.955 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:02.955 * Looking for test storage... 
00:07:02.955 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:02.955 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.955 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:07:02.955 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.217 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:03.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.218 --rc genhtml_branch_coverage=1 00:07:03.218 --rc genhtml_function_coverage=1 00:07:03.218 --rc genhtml_legend=1 00:07:03.218 --rc geninfo_all_blocks=1 00:07:03.218 --rc geninfo_unexecuted_blocks=1 00:07:03.218 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:03.218 ' 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:03.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.218 --rc genhtml_branch_coverage=1 00:07:03.218 --rc genhtml_function_coverage=1 00:07:03.218 --rc genhtml_legend=1 00:07:03.218 --rc geninfo_all_blocks=1 00:07:03.218 --rc geninfo_unexecuted_blocks=1 00:07:03.218 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:03.218 ' 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:03.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.218 --rc genhtml_branch_coverage=1 00:07:03.218 --rc genhtml_function_coverage=1 00:07:03.218 --rc genhtml_legend=1 00:07:03.218 --rc geninfo_all_blocks=1 00:07:03.218 --rc geninfo_unexecuted_blocks=1 00:07:03.218 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:03.218 ' 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:03.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.218 --rc genhtml_branch_coverage=1 00:07:03.218 --rc genhtml_function_coverage=1 00:07:03.218 --rc genhtml_legend=1 00:07:03.218 --rc geninfo_all_blocks=1 00:07:03.218 --rc geninfo_unexecuted_blocks=1 00:07:03.218 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:03.218 ' 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:07:03.218 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:03.219 #define SPDK_CONFIG_H 00:07:03.219 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:03.219 #define SPDK_CONFIG_APPS 1 00:07:03.219 #define SPDK_CONFIG_ARCH native 00:07:03.219 #undef SPDK_CONFIG_ASAN 00:07:03.219 #undef SPDK_CONFIG_AVAHI 00:07:03.219 #undef SPDK_CONFIG_CET 00:07:03.219 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:03.219 #define SPDK_CONFIG_COVERAGE 1 00:07:03.219 #define SPDK_CONFIG_CROSS_PREFIX 00:07:03.219 #undef SPDK_CONFIG_CRYPTO 00:07:03.219 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:03.219 #undef SPDK_CONFIG_CUSTOMOCF 00:07:03.219 #undef SPDK_CONFIG_DAOS 00:07:03.219 #define SPDK_CONFIG_DAOS_DIR 00:07:03.219 #define SPDK_CONFIG_DEBUG 1 00:07:03.219 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:03.219 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:03.219 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:03.219 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:03.219 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:03.219 #undef SPDK_CONFIG_DPDK_UADK 00:07:03.219 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:03.219 #define SPDK_CONFIG_EXAMPLES 1 00:07:03.219 #undef SPDK_CONFIG_FC 00:07:03.219 #define SPDK_CONFIG_FC_PATH 00:07:03.219 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:03.219 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:03.219 #define SPDK_CONFIG_FSDEV 1 00:07:03.219 #undef SPDK_CONFIG_FUSE 00:07:03.219 #define SPDK_CONFIG_FUZZER 1 00:07:03.219 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:03.219 #undef 
SPDK_CONFIG_GOLANG 00:07:03.219 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:03.219 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:03.219 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:03.219 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:03.219 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:03.219 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:03.219 #undef SPDK_CONFIG_HAVE_LZ4 00:07:03.219 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:03.219 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:03.219 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:03.219 #define SPDK_CONFIG_IDXD 1 00:07:03.219 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:03.219 #undef SPDK_CONFIG_IPSEC_MB 00:07:03.219 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:03.219 #define SPDK_CONFIG_ISAL 1 00:07:03.219 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:03.219 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:03.219 #define SPDK_CONFIG_LIBDIR 00:07:03.219 #undef SPDK_CONFIG_LTO 00:07:03.219 #define SPDK_CONFIG_MAX_LCORES 128 00:07:03.219 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:07:03.219 #define SPDK_CONFIG_NVME_CUSE 1 00:07:03.219 #undef SPDK_CONFIG_OCF 00:07:03.219 #define SPDK_CONFIG_OCF_PATH 00:07:03.219 #define SPDK_CONFIG_OPENSSL_PATH 00:07:03.219 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:03.219 #define SPDK_CONFIG_PGO_DIR 00:07:03.219 #undef SPDK_CONFIG_PGO_USE 00:07:03.219 #define SPDK_CONFIG_PREFIX /usr/local 00:07:03.219 #undef SPDK_CONFIG_RAID5F 00:07:03.219 #undef SPDK_CONFIG_RBD 00:07:03.219 #define SPDK_CONFIG_RDMA 1 00:07:03.219 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:03.219 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:03.219 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:03.219 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:03.219 #undef SPDK_CONFIG_SHARED 00:07:03.219 #undef SPDK_CONFIG_SMA 00:07:03.219 #define SPDK_CONFIG_TESTS 1 00:07:03.219 #undef SPDK_CONFIG_TSAN 00:07:03.219 #define SPDK_CONFIG_UBLK 1 00:07:03.219 #define SPDK_CONFIG_UBSAN 1 00:07:03.219 #undef SPDK_CONFIG_UNIT_TESTS 00:07:03.219 #undef SPDK_CONFIG_URING 00:07:03.219 #define SPDK_CONFIG_URING_PATH 00:07:03.219 #undef SPDK_CONFIG_URING_ZNS 00:07:03.219 #undef SPDK_CONFIG_USDT 00:07:03.219 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:03.219 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:03.219 #define SPDK_CONFIG_VFIO_USER 1 00:07:03.219 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:03.219 #define SPDK_CONFIG_VHOST 1 00:07:03.219 #define SPDK_CONFIG_VIRTIO 1 00:07:03.219 #undef SPDK_CONFIG_VTUNE 00:07:03.219 #define SPDK_CONFIG_VTUNE_DIR 00:07:03.219 #define SPDK_CONFIG_WERROR 1 00:07:03.219 #define SPDK_CONFIG_WPDK_DIR 00:07:03.219 #undef SPDK_CONFIG_XNVME 00:07:03.219 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:03.219 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:03.220 22:14:22 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.220 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 3099966 ]] 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 3099966 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.1WFBMF 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.1WFBMF/tests/nvmf /tmp/spdk.1WFBMF 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=785162240 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4499267584 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=86684499968 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500360192 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=7815860224 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:03.221 
22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47245414400 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250178048 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:03.221 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18894336000 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900074496 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5738496 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47249584128 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250182144 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=598016 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450020864 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450033152 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:07:03.222 * Looking for test storage... 
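The set_test_storage trace above and below reduces to the hedged bash sketch that follows. It is a simplification, not the verbatim autotest_common.sh source: the real function also special-cases tmpfs/ramfs/overlay mounts, re-checks projected usage against the filesystem size, and reports sizes in bytes rather than default df blocks; those details are elided here.

  set_test_storage_sketch() {
    # requested_size is in bytes; remaining args are candidate dirs in priority order.
    local requested_size=$1; shift
    local -A avails
    local source fs size use avail pct mount target_dir target_space
    # Build a mount-point -> available-space map from `df -T` (header line dropped).
    while read -r source fs size use avail pct mount; do
      avails["$mount"]=$avail
    done < <(df -T | grep -v Filesystem)
    for target_dir in "$@"; do
      mkdir -p "$target_dir" || continue
      # Resolve which mount point backs this candidate directory.
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
      target_space=${avails[$mount]:-0}
      if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        return 0
      fi
    done
    return 1
  }
  # In this run: requested_size=2214592512, and the candidates were the nvmf fuzz
  # test dir and the /tmp/spdk.1WFBMF fallback created above.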
00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=86684499968 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=10030452736 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:03.222 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # true 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:07:03.222 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:03.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.482 --rc genhtml_branch_coverage=1 00:07:03.482 --rc genhtml_function_coverage=1 00:07:03.482 --rc genhtml_legend=1 00:07:03.482 --rc geninfo_all_blocks=1 00:07:03.482 --rc geninfo_unexecuted_blocks=1 00:07:03.482 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:03.482 ' 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:03.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.482 --rc genhtml_branch_coverage=1 00:07:03.482 --rc genhtml_function_coverage=1 00:07:03.482 --rc genhtml_legend=1 00:07:03.482 --rc geninfo_all_blocks=1 00:07:03.482 --rc geninfo_unexecuted_blocks=1 00:07:03.482 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:03.482 ' 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:03.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.482 --rc genhtml_branch_coverage=1 00:07:03.482 --rc genhtml_function_coverage=1 00:07:03.482 --rc genhtml_legend=1 00:07:03.482 --rc geninfo_all_blocks=1 00:07:03.482 --rc geninfo_unexecuted_blocks=1 00:07:03.482 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:03.482 ' 00:07:03.482 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:03.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.482 --rc genhtml_branch_coverage=1 00:07:03.482 --rc genhtml_function_coverage=1 00:07:03.482 --rc genhtml_legend=1 00:07:03.482 --rc geninfo_all_blocks=1 00:07:03.482 --rc geninfo_unexecuted_blocks=1 00:07:03.482 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:03.483 ' 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:03.483 22:14:22 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:03.483 22:14:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:07:03.483 [2024-10-29 22:14:22.849977] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:03.483 [2024-10-29 22:14:22.850052] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100024 ] 00:07:03.743 [2024-10-29 22:14:23.059003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.743 [2024-10-29 22:14:23.097444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.743 [2024-10-29 22:14:23.156645] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.743 [2024-10-29 22:14:23.172802] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:03.743 INFO: Running with entropic power schedule (0xFF, 100). 00:07:03.743 INFO: Seed: 1710674679 00:07:03.743 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:03.743 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:03.743 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:03.743 INFO: A corpus is not provided, starting from an empty corpus 00:07:03.743 #2 INITED exec/s: 0 rss: 66Mb 00:07:03.743 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:03.743 This may also happen if the target rejected all inputs we tried so far 00:07:03.743 [2024-10-29 22:14:23.231816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:03.743 [2024-10-29 22:14:23.231854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.263 NEW_FUNC[1/715]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:04.263 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:04.263 #28 NEW cov: 12192 ft: 12192 corp: 2/71b lim: 320 exec/s: 0 rss: 74Mb L: 70/70 MS: 1 InsertRepeatedBytes- 00:07:04.263 [2024-10-29 22:14:23.573016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.263 [2024-10-29 22:14:23.573079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.263 #29 NEW cov: 12307 ft: 12980 corp: 3/180b lim: 320 exec/s: 0 rss: 74Mb L: 109/109 MS: 1 InsertRepeatedBytes- 00:07:04.263 [2024-10-29 22:14:23.622703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:bfbfbfbf cdw10:bfbfbfbf cdw11:bfbfbfbf SGL TRANSPORT DATA BLOCK TRANSPORT 0xbfbfbfbfbfbfbfbf 00:07:04.263 [2024-10-29 22:14:23.622733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.263 #31 NEW cov: 12313 ft: 13211 corp: 4/248b lim: 320 exec/s: 0 rss: 74Mb L: 68/109 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:04.263 [2024-10-29 22:14:23.662793] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:04.263 [2024-10-29 22:14:23.662820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.263 #32 NEW cov: 12398 ft: 13453 corp: 5/334b lim: 320 exec/s: 0 rss: 74Mb L: 86/109 MS: 1 CopyPart- 00:07:04.263 [2024-10-29 22:14:23.722964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:04.263 [2024-10-29 22:14:23.722991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.263 #33 NEW cov: 12398 ft: 13535 corp: 6/404b lim: 320 exec/s: 0 rss: 74Mb L: 70/109 MS: 1 ChangeBinInt- 00:07:04.263 [2024-10-29 22:14:23.763045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:04.263 [2024-10-29 22:14:23.763072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.523 #34 NEW cov: 12398 ft: 13607 corp: 7/474b lim: 320 exec/s: 0 rss: 74Mb L: 70/109 MS: 1 ChangeBit- 00:07:04.523 [2024-10-29 22:14:23.823212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.523 [2024-10-29 22:14:23.823240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.523 #35 NEW cov: 12398 ft: 13678 corp: 8/583b lim: 320 exec/s: 0 rss: 74Mb L: 109/109 MS: 1 ChangeBit- 00:07:04.523 [2024-10-29 22:14:23.883558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:04.523 [2024-10-29 22:14:23.883585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.523 [2024-10-29 22:14:23.883645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:5 nsid:2a2a2a2a cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:04.523 [2024-10-29 22:14:23.883663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.523 #36 NEW cov: 12398 ft: 13969 corp: 9/729b lim: 320 exec/s: 0 rss: 74Mb L: 146/146 MS: 1 CopyPart- 00:07:04.523 [2024-10-29 22:14:23.943606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:f3a32a2a cdw11:e53196f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:04.523 [2024-10-29 22:14:23.943633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.523 #37 NEW cov: 12398 ft: 13984 corp: 10/807b lim: 320 exec/s: 0 rss: 74Mb L: 78/146 MS: 1 CMP- DE: "\243\363\361\2261\3455\000"- 00:07:04.523 [2024-10-29 22:14:23.983625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:04.523 [2024-10-29 22:14:23.983651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.523 #38 NEW cov: 12398 ft: 14039 corp: 11/877b lim: 320 exec/s: 0 rss: 74Mb L: 70/146 MS: 1 CopyPart- 00:07:04.523 [2024-10-29 22:14:24.023743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.523 [2024-10-29 22:14:24.023770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.523 #39 NEW cov: 12398 ft: 14052 corp: 12/994b lim: 320 exec/s: 0 rss: 74Mb L: 117/146 MS: 1 PersAutoDict- DE: "\243\363\361\2261\3455\000"- 00:07:04.782 [2024-10-29 22:14:24.063897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2b cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:04.782 [2024-10-29 22:14:24.063924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.782 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:04.782 #42 NEW cov: 12421 ft: 14079 corp: 13/1081b lim: 320 exec/s: 0 rss: 74Mb L: 87/146 MS: 3 EraseBytes-CopyPart-CopyPart- 00:07:04.782 [2024-10-29 22:14:24.124057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.782 [2024-10-29 22:14:24.124084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.782 #43 NEW cov: 12421 ft: 14149 corp: 14/1200b lim: 320 exec/s: 0 rss: 74Mb L: 119/146 MS: 1 CMP- DE: "\000\006"- 00:07:04.782 [2024-10-29 22:14:24.184165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e7) qid:0 cid:4 nsid:e7e7e7e7 cdw10:e7e7e7e7 cdw11:e7e7e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0xe7e7e7e7e7e7e7e7 00:07:04.782 [2024-10-29 22:14:24.184191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.782 NEW_FUNC[1/1]: 0x1529e58 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2213 00:07:04.782 #51 NEW cov: 12452 ft: 14255 corp: 15/1274b lim: 320 exec/s: 0 rss: 74Mb L: 74/146 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes- 00:07:04.782 [2024-10-29 22:14:24.224308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:04.782 [2024-10-29 22:14:24.224334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.782 #52 NEW cov: 12452 ft: 14263 corp: 16/1344b lim: 320 exec/s: 52 rss: 74Mb L: 70/146 MS: 1 PersAutoDict- DE: "\000\006"- 00:07:04.782 [2024-10-29 22:14:24.264417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:f3a32a2a cdw11:e53296f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:04.782 [2024-10-29 22:14:24.264444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.782 #53 NEW cov: 12452 ft: 14336 corp: 17/1422b 
lim: 320 exec/s: 53 rss: 74Mb L: 78/146 MS: 1 ChangeASCIIInt- 00:07:05.042 [2024-10-29 22:14:24.324570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2aff2a2a 00:07:05.042 [2024-10-29 22:14:24.324597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.042 #54 NEW cov: 12452 ft: 14365 corp: 18/1493b lim: 320 exec/s: 54 rss: 74Mb L: 71/146 MS: 1 InsertByte- 00:07:05.042 [2024-10-29 22:14:24.384784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:f3a32a2a cdw11:e53396f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:05.042 [2024-10-29 22:14:24.384809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.042 #55 NEW cov: 12452 ft: 14401 corp: 19/1571b lim: 320 exec/s: 55 rss: 75Mb L: 78/146 MS: 1 ChangeASCIIInt- 00:07:05.042 [2024-10-29 22:14:24.444948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:05.042 [2024-10-29 22:14:24.444974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.042 #56 NEW cov: 12452 ft: 14427 corp: 20/1641b lim: 320 exec/s: 56 rss: 75Mb L: 70/146 MS: 1 ShuffleBytes- 00:07:05.042 [2024-10-29 22:14:24.485049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.042 [2024-10-29 22:14:24.485074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.042 #57 NEW cov: 12452 ft: 14450 corp: 21/1750b lim: 320 exec/s: 57 rss: 75Mb L: 109/146 MS: 1 ChangeBit- 00:07:05.042 [2024-10-29 22:14:24.525262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.042 [2024-10-29 22:14:24.525288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.042 [2024-10-29 22:14:24.525349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:2a2a0000 cdw11:2a2a2a2a 00:07:05.042 [2024-10-29 22:14:24.525363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.302 #58 NEW cov: 12452 ft: 14506 corp: 22/1919b lim: 320 exec/s: 58 rss: 75Mb L: 169/169 MS: 1 InsertRepeatedBytes- 00:07:05.302 [2024-10-29 22:14:24.585320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.302 [2024-10-29 22:14:24.585346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.302 #59 NEW cov: 12452 ft: 14512 corp: 23/2037b lim: 320 exec/s: 59 rss: 75Mb L: 118/169 MS: 1 InsertByte- 00:07:05.302 [2024-10-29 22:14:24.625450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:b5b5b5b5 cdw11:b5b5b5b5 SGL TRANSPORT DATA BLOCK TRANSPORT 
0xb5b5b5002a000000 00:07:05.302 [2024-10-29 22:14:24.625478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.302 #62 NEW cov: 12452 ft: 14516 corp: 24/2122b lim: 320 exec/s: 62 rss: 75Mb L: 85/169 MS: 3 CrossOver-CopyPart-InsertRepeatedBytes- 00:07:05.302 [2024-10-29 22:14:24.685631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:f3a32a2a cdw11:e53296f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:05.302 [2024-10-29 22:14:24.685657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.302 #63 NEW cov: 12452 ft: 14539 corp: 25/2200b lim: 320 exec/s: 63 rss: 75Mb L: 78/169 MS: 1 ChangeBit- 00:07:05.302 [2024-10-29 22:14:24.725752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.302 [2024-10-29 22:14:24.725778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.302 #64 NEW cov: 12452 ft: 14623 corp: 26/2317b lim: 320 exec/s: 64 rss: 75Mb L: 117/169 MS: 1 ChangeASCIIInt- 00:07:05.302 [2024-10-29 22:14:24.765992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:2a2a0000 cdw10:2a2a2a2a cdw11:f3a32a2a 00:07:05.302 [2024-10-29 22:14:24.766019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.302 [2024-10-29 22:14:24.766070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.302 [2024-10-29 22:14:24.766085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.302 #65 NEW cov: 12452 ft: 14645 corp: 27/2484b lim: 320 exec/s: 65 rss: 75Mb L: 167/169 MS: 1 CrossOver- 00:07:05.561 [2024-10-29 22:14:24.826059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:05.561 [2024-10-29 22:14:24.826087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.561 #66 NEW cov: 12452 ft: 14693 corp: 28/2554b lim: 320 exec/s: 66 rss: 75Mb L: 70/169 MS: 1 ChangeBit- 00:07:05.561 [2024-10-29 22:14:24.866153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:05.561 [2024-10-29 22:14:24.866179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.561 #67 NEW cov: 12452 ft: 14701 corp: 29/2624b lim: 320 exec/s: 67 rss: 75Mb L: 70/169 MS: 1 ShuffleBytes- 00:07:05.561 [2024-10-29 22:14:24.906265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.561 [2024-10-29 22:14:24.906292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.561 #68 NEW cov: 12452 ft: 14704 corp: 30/2742b lim: 320 exec/s: 68 rss: 75Mb L: 118/169 MS: 1 InsertByte- 
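For reference, the coverage lines above come from fuzzer 0, which nvmf/run.sh launched with the steps traced just before the run started (per-fuzzer port, JSON config rewrite, LSAN leak suppressions). A condensed, hedged sketch of that launch sequence follows; $rootdir stands in for the /var/jenkins/workspace/short-fuzz-phy-autotest/spdk checkout seen in the trace, and the redirections into the config and suppression files are inferred rather than copied from the script.

  start_llvm_fuzz_sketch() {
    local fuzzer_type=$1 timen=$2 core=$3
    local port corpus_dir nvmf_cfg suppress_file trid
    port="44$(printf %02d "$fuzzer_type")"            # 4400 for fuzzer 0, 4401 for fuzzer 1, ...
    corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
    nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    mkdir -p "$corpus_dir"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # Point this fuzzer's NVMe-oF target config at its own TCP port.
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    # Known allocations to ignore while LeakSanitizer is active.
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"
    LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
      "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
      -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
      -D "$corpus_dir" -Z "$fuzzer_type"
  }
  # Invoked for the run above as: start_llvm_fuzz 0 1 0x1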
00:07:05.561 [2024-10-29 22:14:24.946349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4f) qid:0 cid:4 nsid:2020202 cdw10:02020202 cdw11:02020202 SGL TRANSPORT DATA BLOCK TRANSPORT 0x202020202020202 00:07:05.561 [2024-10-29 22:14:24.946375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.561 #73 NEW cov: 12452 ft: 14733 corp: 31/2847b lim: 320 exec/s: 73 rss: 75Mb L: 105/169 MS: 5 ChangeByte-ChangeByte-CopyPart-InsertByte-InsertRepeatedBytes- 00:07:05.561 [2024-10-29 22:14:24.986573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.561 [2024-10-29 22:14:24.986600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.561 #74 NEW cov: 12452 ft: 14737 corp: 32/2956b lim: 320 exec/s: 74 rss: 75Mb L: 109/169 MS: 1 ChangeBit- 00:07:05.561 [2024-10-29 22:14:25.026656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:05.561 [2024-10-29 22:14:25.026682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.561 #75 NEW cov: 12452 ft: 14781 corp: 33/3027b lim: 320 exec/s: 75 rss: 75Mb L: 71/169 MS: 1 InsertByte- 00:07:05.561 [2024-10-29 22:14:25.066748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.561 [2024-10-29 22:14:25.066777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.821 #78 NEW cov: 12452 ft: 14785 corp: 34/3129b lim: 320 exec/s: 78 rss: 75Mb L: 102/169 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:05.821 [2024-10-29 22:14:25.106895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:bfbfbfbf cdw10:bfbfbfbf cdw11:bfbfbfbf SGL TRANSPORT DATA BLOCK TRANSPORT 0xbfbfbfbfbfbfbfbf 00:07:05.821 [2024-10-29 22:14:25.106921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.821 #79 NEW cov: 12452 ft: 14819 corp: 35/3197b lim: 320 exec/s: 79 rss: 75Mb L: 68/169 MS: 1 ChangeBit- 00:07:05.821 [2024-10-29 22:14:25.167182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:2a2a2a2a cdw10:f3a32a2a cdw11:e53296f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:05.821 [2024-10-29 22:14:25.167209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.821 [2024-10-29 22:14:25.167267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:5 nsid:2a2a2a2a cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a0035e53196f1 00:07:05.821 [2024-10-29 22:14:25.167281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.821 #80 NEW cov: 12452 ft: 14918 corp: 36/3334b lim: 320 exec/s: 80 rss: 75Mb L: 137/169 MS: 1 CrossOver- 00:07:05.821 [2024-10-29 22:14:25.227191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 
nsid:2a2a2a2a cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x2a2a2a2a2a2a2a2a 00:07:05.821 [2024-10-29 22:14:25.227219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.821 #81 NEW cov: 12452 ft: 14922 corp: 37/3432b lim: 320 exec/s: 40 rss: 75Mb L: 98/169 MS: 1 CopyPart- 00:07:05.821 #81 DONE cov: 12452 ft: 14922 corp: 37/3432b lim: 320 exec/s: 40 rss: 75Mb 00:07:05.821 ###### Recommended dictionary. ###### 00:07:05.821 "\243\363\361\2261\3455\000" # Uses: 1 00:07:05.821 "\000\006" # Uses: 1 00:07:05.821 ###### End of recommended dictionary. ###### 00:07:05.821 Done 81 runs in 2 second(s) 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:06.081 22:14:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:07:06.081 [2024-10-29 22:14:25.397269] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
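With run 0 finished and its /tmp/fuzz_json_0.conf and suppression file removed, the trace moves to the next iteration of the driver loop in test/fuzz/llvm/common.sh. A minimal sketch of that loop, assuming start_llvm_fuzz is the per-fuzzer launcher sketched after the first run's output:

  start_llvm_fuzz_short_sketch() {
    # Run each admin-command fuzzer for $time seconds on core 0x1.
    local fuzz_num=$1 time=$2 i
    for (( i = 0; i < fuzz_num; i++ )); do
      start_llvm_fuzz "$i" "$time" 0x1
    done
  }
  # In this job fuzz_num=25 (one per .fn handler counted in llvm_nvme_fuzz.c) and
  # time=1, so fuzzer 0 (admin commands) above is followed here by fuzzer 1
  # (GET LOG PAGE) listening on port 4401.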
00:07:06.081 [2024-10-29 22:14:25.397364] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100385 ] 00:07:06.081 [2024-10-29 22:14:25.594293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.341 [2024-10-29 22:14:25.633277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.341 [2024-10-29 22:14:25.692523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.341 [2024-10-29 22:14:25.708715] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:06.341 INFO: Running with entropic power schedule (0xFF, 100). 00:07:06.341 INFO: Seed: 4245656611 00:07:06.341 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:06.341 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:06.341 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:06.341 INFO: A corpus is not provided, starting from an empty corpus 00:07:06.341 #2 INITED exec/s: 0 rss: 66Mb 00:07:06.341 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:06.341 This may also happen if the target rejected all inputs we tried so far 00:07:06.341 [2024-10-29 22:14:25.764007] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:06.341 [2024-10-29 22:14:25.764422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.341 [2024-10-29 22:14:25.764451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.341 [2024-10-29 22:14:25.764502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.341 [2024-10-29 22:14:25.764516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.341 [2024-10-29 22:14:25.764567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.341 [2024-10-29 22:14:25.764581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.600 NEW_FUNC[1/715]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:06.600 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:06.600 #5 NEW cov: 12287 ft: 12303 corp: 2/19b lim: 30 exec/s: 0 rss: 74Mb L: 18/18 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:07:06.600 [2024-10-29 22:14:26.105022] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:06.600 [2024-10-29 22:14:26.105161] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:06.600 [2024-10-29 22:14:26.105508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.600 [2024-10-29 22:14:26.105570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.600 [2024-10-29 22:14:26.105653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.600 [2024-10-29 22:14:26.105681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.600 [2024-10-29 22:14:26.105759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.600 [2024-10-29 22:14:26.105785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.859 NEW_FUNC[1/1]: 0xfbb008 in spdk_ring_dequeue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:445 00:07:06.859 #6 NEW cov: 12417 ft: 13062 corp: 3/38b lim: 30 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 CrossOver- 00:07:06.859 [2024-10-29 22:14:26.175011] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:06.859 [2024-10-29 22:14:26.175522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.175550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.859 [2024-10-29 22:14:26.175604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.175619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.859 [2024-10-29 22:14:26.175672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.175687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.859 [2024-10-29 22:14:26.175738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.175753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.859 #12 NEW cov: 12423 ft: 13878 corp: 4/66b lim: 30 exec/s: 0 rss: 74Mb L: 28/28 MS: 1 CopyPart- 00:07:06.859 [2024-10-29 22:14:26.215045] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (2560) > len (4) 00:07:06.859 [2024-10-29 22:14:26.215362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.215389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.859 [2024-10-29 22:14:26.215444] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.215459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.859 #20 NEW cov: 12514 ft: 14430 corp: 5/79b lim: 30 exec/s: 0 rss: 74Mb L: 13/28 MS: 3 CopyPart-ShuffleBytes-CrossOver- 00:07:06.859 [2024-10-29 22:14:26.255174] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:06.859 [2024-10-29 22:14:26.255289] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:06.859 [2024-10-29 22:14:26.255590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000060 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.255616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.859 [2024-10-29 22:14:26.255676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.255691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.859 [2024-10-29 22:14:26.255745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.255759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.859 #21 NEW cov: 12514 ft: 14535 corp: 6/98b lim: 30 exec/s: 0 rss: 74Mb L: 19/28 MS: 1 ChangeByte- 00:07:06.859 [2024-10-29 22:14:26.315381] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:06.859 [2024-10-29 22:14:26.315495] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:06.859 [2024-10-29 22:14:26.315602] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:07:06.859 [2024-10-29 22:14:26.315810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.315837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.859 [2024-10-29 22:14:26.315892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.315908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.859 [2024-10-29 22:14:26.315958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.315974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.859 #22 NEW cov: 12514 ft: 14667 corp: 7/117b lim: 30 exec/s: 0 rss: 74Mb L: 19/28 MS: 1 
ChangeBinInt- 00:07:06.859 [2024-10-29 22:14:26.355481] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:06.859 [2024-10-29 22:14:26.355594] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:06.859 [2024-10-29 22:14:26.355698] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:07:06.859 [2024-10-29 22:14:26.356027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.356054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.859 [2024-10-29 22:14:26.356105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.356120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.859 [2024-10-29 22:14:26.356167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.356181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.859 [2024-10-29 22:14:26.356227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.859 [2024-10-29 22:14:26.356242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.119 #23 NEW cov: 12520 ft: 14784 corp: 8/141b lim: 30 exec/s: 0 rss: 75Mb L: 24/28 MS: 1 CrossOver- 00:07:07.119 [2024-10-29 22:14:26.415699] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:07:07.119 [2024-10-29 22:14:26.415998] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:07:07.119 [2024-10-29 22:14:26.416217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.416242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.119 [2024-10-29 22:14:26.416301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.416315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.119 [2024-10-29 22:14:26.416367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.416382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.119 [2024-10-29 22:14:26.416433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.416448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.119 #24 NEW cov: 12520 ft: 14862 corp: 9/166b lim: 30 exec/s: 0 rss: 75Mb L: 25/28 MS: 1 CrossOver- 00:07:07.119 [2024-10-29 22:14:26.455738] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:07.119 [2024-10-29 22:14:26.456136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.456163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.119 [2024-10-29 22:14:26.456217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.456232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.119 [2024-10-29 22:14:26.456286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.456307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.119 #25 NEW cov: 12520 ft: 14963 corp: 10/187b lim: 30 exec/s: 0 rss: 75Mb L: 21/28 MS: 1 InsertRepeatedBytes- 00:07:07.119 [2024-10-29 22:14:26.495797] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (2560) > len (4) 00:07:07.119 [2024-10-29 22:14:26.496088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.496113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.119 [2024-10-29 22:14:26.496168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.496184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.119 #26 NEW cov: 12520 ft: 15031 corp: 11/204b lim: 30 exec/s: 0 rss: 75Mb L: 17/28 MS: 1 CrossOver- 00:07:07.119 [2024-10-29 22:14:26.556096] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:07.119 [2024-10-29 22:14:26.556215] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:07.119 [2024-10-29 22:14:26.556605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000060 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.556632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.119 [2024-10-29 22:14:26.556685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.556700] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.119 [2024-10-29 22:14:26.556752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.556766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.119 [2024-10-29 22:14:26.556820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.556834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.119 #27 NEW cov: 12520 ft: 15050 corp: 12/231b lim: 30 exec/s: 0 rss: 75Mb L: 27/28 MS: 1 CopyPart- 00:07:07.119 [2024-10-29 22:14:26.616205] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:07:07.119 [2024-10-29 22:14:26.616609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.616636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.119 [2024-10-29 22:14:26.616691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.616707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.119 [2024-10-29 22:14:26.616761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-10-29 22:14:26.616776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.119 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:07.120 #28 NEW cov: 12543 ft: 15124 corp: 13/249b lim: 30 exec/s: 0 rss: 75Mb L: 18/28 MS: 1 EraseBytes- 00:07:07.379 [2024-10-29 22:14:26.656346] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:07:07.379 [2024-10-29 22:14:26.656647] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:07:07.379 [2024-10-29 22:14:26.656852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-10-29 22:14:26.656880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.379 [2024-10-29 22:14:26.656932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-10-29 22:14:26.656947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.379 [2024-10-29 22:14:26.657000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-10-29 22:14:26.657014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.379 [2024-10-29 22:14:26.657071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-10-29 22:14:26.657086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.379 #29 NEW cov: 12543 ft: 15175 corp: 14/278b lim: 30 exec/s: 0 rss: 75Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:07:07.379 [2024-10-29 22:14:26.716460] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:07.379 [2024-10-29 22:14:26.716760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-10-29 22:14:26.716788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.379 [2024-10-29 22:14:26.716843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-10-29 22:14:26.716858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.379 #30 NEW cov: 12543 ft: 15197 corp: 15/293b lim: 30 exec/s: 0 rss: 75Mb L: 15/29 MS: 1 EraseBytes- 00:07:07.379 [2024-10-29 22:14:26.756566] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (2560) > len (4) 00:07:07.379 [2024-10-29 22:14:26.756881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-10-29 22:14:26.756907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.379 #31 NEW cov: 12544 ft: 15350 corp: 16/310b lim: 30 exec/s: 31 rss: 75Mb L: 17/29 MS: 1 ChangeBit- 00:07:07.379 [2024-10-29 22:14:26.816810] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534572) > buf size (4096) 00:07:07.379 [2024-10-29 22:14:26.817305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-10-29 22:14:26.817331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.379 [2024-10-29 22:14:26.817384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-10-29 22:14:26.817399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.379 [2024-10-29 22:14:26.817453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-10-29 22:14:26.817467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.379 [2024-10-29 22:14:26.817521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00ff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-10-29 22:14:26.817535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.379 #32 NEW cov: 12544 ft: 15379 corp: 17/336b lim: 30 exec/s: 32 rss: 75Mb L: 26/29 MS: 1 InsertByte- 00:07:07.379 [2024-10-29 22:14:26.856805] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (2560) > len (4) 00:07:07.379 [2024-10-29 22:14:26.857011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-10-29 22:14:26.857037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.379 #33 NEW cov: 12544 ft: 15642 corp: 18/347b lim: 30 exec/s: 33 rss: 75Mb L: 11/29 MS: 1 EraseBytes- 00:07:07.379 [2024-10-29 22:14:26.896948] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:07.379 [2024-10-29 22:14:26.897250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-10-29 22:14:26.897276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.379 [2024-10-29 22:14:26.897326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-10-29 22:14:26.897342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.639 #34 NEW cov: 12544 ft: 15682 corp: 19/362b lim: 30 exec/s: 34 rss: 75Mb L: 15/29 MS: 1 CopyPart- 00:07:07.639 [2024-10-29 22:14:26.957211] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (82988) > buf size (4096) 00:07:07.639 [2024-10-29 22:14:26.957516] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:07:07.639 [2024-10-29 22:14:26.957730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:510a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-10-29 22:14:26.957757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.639 [2024-10-29 22:14:26.957814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-10-29 22:14:26.957828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.639 [2024-10-29 22:14:26.957881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-10-29 22:14:26.957896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.639 [2024-10-29 22:14:26.957949] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-10-29 22:14:26.957964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.639 #35 NEW cov: 12544 ft: 15729 corp: 20/387b lim: 30 exec/s: 35 rss: 75Mb L: 25/29 MS: 1 ChangeByte- 00:07:07.639 [2024-10-29 22:14:26.997229] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (2560) > len (2052) 00:07:07.639 [2024-10-29 22:14:26.997443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-10-29 22:14:26.997468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.639 #36 NEW cov: 12544 ft: 15738 corp: 21/398b lim: 30 exec/s: 36 rss: 75Mb L: 11/29 MS: 1 ChangeBit- 00:07:07.639 [2024-10-29 22:14:27.057490] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (82988) > buf size (4096) 00:07:07.639 [2024-10-29 22:14:27.057792] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:07:07.639 [2024-10-29 22:14:27.058011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:510a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-10-29 22:14:27.058038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.639 [2024-10-29 22:14:27.058092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a0008 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-10-29 22:14:27.058110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.639 [2024-10-29 22:14:27.058162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-10-29 22:14:27.058177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.639 [2024-10-29 22:14:27.058230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-10-29 22:14:27.058244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.639 #37 NEW cov: 12544 ft: 15763 corp: 22/423b lim: 30 exec/s: 37 rss: 75Mb L: 25/29 MS: 1 ChangeBinInt- 00:07:07.639 [2024-10-29 22:14:27.117606] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (2560) > len (4) 00:07:07.639 [2024-10-29 22:14:27.117921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-10-29 22:14:27.117948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.639 [2024-10-29 22:14:27.118004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-10-29 22:14:27.118020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.639 #38 NEW cov: 12544 ft: 15781 corp: 23/440b lim: 30 exec/s: 38 rss: 75Mb L: 17/29 MS: 1 ShuffleBytes- 00:07:07.639 [2024-10-29 22:14:27.157741] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10392) > buf size (4096) 00:07:07.639 [2024-10-29 22:14:27.157859] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:07.639 [2024-10-29 22:14:27.158172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a250000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-10-29 22:14:27.158198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.639 [2024-10-29 22:14:27.158254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-10-29 22:14:27.158270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.639 [2024-10-29 22:14:27.158327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-10-29 22:14:27.158342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.899 #39 NEW cov: 12544 ft: 15827 corp: 24/459b lim: 30 exec/s: 39 rss: 75Mb L: 19/29 MS: 1 InsertByte- 00:07:07.899 [2024-10-29 22:14:27.217948] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:07:07.899 [2024-10-29 22:14:27.218257] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:07:07.899 [2024-10-29 22:14:27.218487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.218514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.899 [2024-10-29 22:14:27.218570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.218585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.899 [2024-10-29 22:14:27.218642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.218656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.899 [2024-10-29 22:14:27.218713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff000020 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.218728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.899 #40 NEW cov: 12544 ft: 15848 corp: 25/488b lim: 30 exec/s: 40 rss: 75Mb L: 29/29 MS: 1 ChangeBit- 00:07:07.899 [2024-10-29 22:14:27.278035] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786436) > buf size (4096) 00:07:07.899 [2024-10-29 22:14:27.278352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.278378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.899 [2024-10-29 22:14:27.278430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.278445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.899 #41 NEW cov: 12544 ft: 15868 corp: 26/501b lim: 30 exec/s: 41 rss: 75Mb L: 13/29 MS: 1 ChangeBinInt- 00:07:07.899 [2024-10-29 22:14:27.318189] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:07:07.899 [2024-10-29 22:14:27.318689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.318716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.899 [2024-10-29 22:14:27.318771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.318786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.899 [2024-10-29 22:14:27.318841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.318855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.899 [2024-10-29 22:14:27.318909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.318924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.899 #42 NEW cov: 12544 ft: 15907 corp: 27/530b lim: 30 exec/s: 42 rss: 75Mb L: 29/29 MS: 1 CrossOver- 00:07:07.899 [2024-10-29 22:14:27.358347] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:07.899 [2024-10-29 22:14:27.358480] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:07.899 [2024-10-29 22:14:27.358590] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (12032) > len (4) 00:07:07.899 [2024-10-29 22:14:27.358796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000060 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.358823] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.899 [2024-10-29 22:14:27.358879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.358899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.899 [2024-10-29 22:14:27.358952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.358967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.899 #43 NEW cov: 12544 ft: 15960 corp: 28/549b lim: 30 exec/s: 43 rss: 75Mb L: 19/29 MS: 1 ChangeByte- 00:07:07.899 [2024-10-29 22:14:27.398370] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:07.899 [2024-10-29 22:14:27.398670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.398697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.899 [2024-10-29 22:14:27.398753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-10-29 22:14:27.398768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.159 #44 NEW cov: 12544 ft: 15965 corp: 29/564b lim: 30 exec/s: 44 rss: 75Mb L: 15/29 MS: 1 ChangeBinInt- 00:07:08.159 [2024-10-29 22:14:27.458566] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:08.159 [2024-10-29 22:14:27.458684] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:08.159 [2024-10-29 22:14:27.458793] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:07:08.159 [2024-10-29 22:14:27.459005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.459030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.159 [2024-10-29 22:14:27.459077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.459091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.159 [2024-10-29 22:14:27.459138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.459152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.159 #45 NEW cov: 12544 ft: 15973 corp: 30/583b lim: 30 exec/s: 45 rss: 75Mb L: 
19/29 MS: 1 ChangeByte- 00:07:08.159 [2024-10-29 22:14:27.498666] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:07:08.159 [2024-10-29 22:14:27.498786] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000707 00:07:08.159 [2024-10-29 22:14:27.498895] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (390352) > buf size (4096) 00:07:08.159 [2024-10-29 22:14:27.499101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.499127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.159 [2024-10-29 22:14:27.499181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.499197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.159 [2024-10-29 22:14:27.499252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7d3381e5 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.499266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.159 #46 NEW cov: 12544 ft: 15987 corp: 31/601b lim: 30 exec/s: 46 rss: 75Mb L: 18/29 MS: 1 CMP- DE: "r\007\007}3\3455\000"- 00:07:08.159 [2024-10-29 22:14:27.538899] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:07:08.159 [2024-10-29 22:14:27.539016] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:07:08.159 [2024-10-29 22:14:27.539223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.539251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.159 [2024-10-29 22:14:27.539304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.539320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.159 [2024-10-29 22:14:27.539374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.539389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.159 #47 NEW cov: 12544 ft: 16001 corp: 32/619b lim: 30 exec/s: 47 rss: 75Mb L: 18/29 MS: 1 CrossOver- 00:07:08.159 [2024-10-29 22:14:27.578878] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:08.159 [2024-10-29 22:14:27.578997] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (2560) > len (4) 00:07:08.159 [2024-10-29 22:14:27.579197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.579224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.159 [2024-10-29 22:14:27.579279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.579294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.159 #48 NEW cov: 12544 ft: 16002 corp: 33/633b lim: 30 exec/s: 48 rss: 76Mb L: 14/29 MS: 1 CrossOver- 00:07:08.159 [2024-10-29 22:14:27.639095] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:08.159 [2024-10-29 22:14:27.639210] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (11152) > buf size (4096) 00:07:08.159 [2024-10-29 22:14:27.639619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000060 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.639646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.159 [2024-10-29 22:14:27.639699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0ae30000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.639714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.159 [2024-10-29 22:14:27.639765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.639780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.159 [2024-10-29 22:14:27.639835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-10-29 22:14:27.639850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.419 #49 NEW cov: 12544 ft: 16039 corp: 34/660b lim: 30 exec/s: 49 rss: 76Mb L: 27/29 MS: 1 ChangeByte- 00:07:08.419 [2024-10-29 22:14:27.699281] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:07:08.419 [2024-10-29 22:14:27.699584] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:07:08.419 [2024-10-29 22:14:27.699801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-10-29 22:14:27.699828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.419 [2024-10-29 22:14:27.699883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-10-29 22:14:27.699898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.419 
[2024-10-29 22:14:27.699951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-10-29 22:14:27.699965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.419 [2024-10-29 22:14:27.700018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-10-29 22:14:27.700033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.419 #50 NEW cov: 12544 ft: 16040 corp: 35/689b lim: 30 exec/s: 50 rss: 76Mb L: 29/29 MS: 1 CrossOver- 00:07:08.419 [2024-10-29 22:14:27.739294] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:08.419 [2024-10-29 22:14:27.739512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-10-29 22:14:27.739536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.419 #51 NEW cov: 12544 ft: 16100 corp: 36/700b lim: 30 exec/s: 25 rss: 76Mb L: 11/29 MS: 1 EraseBytes- 00:07:08.419 #51 DONE cov: 12544 ft: 16100 corp: 36/700b lim: 30 exec/s: 25 rss: 76Mb 00:07:08.419 ###### Recommended dictionary. ###### 00:07:08.419 "r\007\007}3\3455\000" # Uses: 0 00:07:08.419 ###### End of recommended dictionary. ###### 00:07:08.419 Done 51 runs in 2 second(s) 00:07:08.419 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:07:08.419 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:08.419 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:08.419 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:08.419 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:08.419 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:08.419 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:08.419 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:08.420 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:08.420 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:08.420 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:08.420 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:07:08.420 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:07:08.420 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:08.420 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:08.420 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:08.420 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:08.420 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:08.420 22:14:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:07:08.420 [2024-10-29 22:14:27.922336] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:08.420 [2024-10-29 22:14:27.922406] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100739 ] 00:07:08.679 [2024-10-29 22:14:28.123903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.679 [2024-10-29 22:14:28.162709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.938 [2024-10-29 22:14:28.221829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.938 [2024-10-29 22:14:28.237987] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:08.938 INFO: Running with entropic power schedule (0xFF, 100). 00:07:08.938 INFO: Seed: 2480695546 00:07:08.938 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:08.938 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:08.938 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:08.938 INFO: A corpus is not provided, starting from an empty corpus 00:07:08.938 #2 INITED exec/s: 0 rss: 66Mb 00:07:08.938 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:08.938 This may also happen if the target rejected all inputs we tried so far 00:07:08.938 [2024-10-29 22:14:28.303428] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:08.938 [2024-10-29 22:14:28.303556] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:08.938 [2024-10-29 22:14:28.303669] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:08.938 [2024-10-29 22:14:28.303781] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:08.938 [2024-10-29 22:14:28.304004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.938 [2024-10-29 22:14:28.304037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.938 [2024-10-29 22:14:28.304094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.938 [2024-10-29 22:14:28.304110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.938 [2024-10-29 22:14:28.304163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.938 [2024-10-29 22:14:28.304179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.938 [2024-10-29 22:14:28.304236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.938 [2024-10-29 22:14:28.304252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.197 NEW_FUNC[1/715]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:09.197 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:09.197 #9 NEW cov: 12209 ft: 12210 corp: 2/34b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:09.197 [2024-10-29 22:14:28.644434] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.197 [2024-10-29 22:14:28.644721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000000f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.197 [2024-10-29 22:14:28.644784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.197 #11 NEW cov: 12339 ft: 13503 corp: 3/41b lim: 35 exec/s: 0 rss: 74Mb L: 7/33 MS: 2 CrossOver-InsertByte- 00:07:09.197 [2024-10-29 22:14:28.714474] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.197 [2024-10-29 22:14:28.714600] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.197 [2024-10-29 22:14:28.714712] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify 
Namespace for invalid NSID 0 00:07:09.197 [2024-10-29 22:14:28.714823] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.197 [2024-10-29 22:14:28.715038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.197 [2024-10-29 22:14:28.715068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.197 [2024-10-29 22:14:28.715126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.197 [2024-10-29 22:14:28.715144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.197 [2024-10-29 22:14:28.715198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.197 [2024-10-29 22:14:28.715215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.197 [2024-10-29 22:14:28.715272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.197 [2024-10-29 22:14:28.715288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.457 #12 NEW cov: 12345 ft: 13795 corp: 4/74b lim: 35 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 ShuffleBytes- 00:07:09.457 [2024-10-29 22:14:28.754542] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.754666] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.754775] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.754883] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.755105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.755139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.457 [2024-10-29 22:14:28.755198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.755216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.457 [2024-10-29 22:14:28.755272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.755290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.457 [2024-10-29 22:14:28.755348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.755364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.457 #13 NEW cov: 12430 ft: 14011 corp: 5/108b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 CopyPart- 00:07:09.457 [2024-10-29 22:14:28.814853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000040 cdw11:000000f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.814881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.457 #14 NEW cov: 12440 ft: 14243 corp: 6/115b lim: 35 exec/s: 0 rss: 74Mb L: 7/34 MS: 1 ChangeBit- 00:07:09.457 [2024-10-29 22:14:28.874922] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.875050] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.875163] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.875273] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.875411] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.875645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.875677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.457 [2024-10-29 22:14:28.875736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.875753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.457 [2024-10-29 22:14:28.875808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.875825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.457 [2024-10-29 22:14:28.875881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.875898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.457 [2024-10-29 22:14:28.875951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.875968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:09.457 #15 NEW cov: 12440 ft: 14352 corp: 7/150b lim: 35 exec/s: 0 rss: 75Mb L: 35/35 MS: 1 CrossOver- 00:07:09.457 [2024-10-29 22:14:28.935180] 
ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.935313] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.935425] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.935653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000002a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.935678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.457 [2024-10-29 22:14:28.935729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.935745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.457 [2024-10-29 22:14:28.935795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.935810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.457 [2024-10-29 22:14:28.935862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.935876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.457 #16 NEW cov: 12440 ft: 14547 corp: 8/184b lim: 35 exec/s: 0 rss: 75Mb L: 34/35 MS: 1 InsertByte- 00:07:09.457 [2024-10-29 22:14:28.975210] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.975346] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.975462] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.975575] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.975693] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.457 [2024-10-29 22:14:28.975926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.975954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.457 [2024-10-29 22:14:28.976013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:23000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.976032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.457 [2024-10-29 22:14:28.976088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.976106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.457 [2024-10-29 22:14:28.976160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.976176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.457 [2024-10-29 22:14:28.976232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-10-29 22:14:28.976253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:09.716 #17 NEW cov: 12440 ft: 14565 corp: 9/219b lim: 35 exec/s: 0 rss: 75Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:09.716 [2024-10-29 22:14:29.035364] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.716 [2024-10-29 22:14:29.035486] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.716 [2024-10-29 22:14:29.035601] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.716 [2024-10-29 22:14:29.035713] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.035927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.035956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.717 [2024-10-29 22:14:29.036011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.036029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.717 [2024-10-29 22:14:29.036085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.036102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.717 [2024-10-29 22:14:29.036154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.036171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.717 #18 NEW cov: 12440 ft: 14647 corp: 10/253b lim: 35 exec/s: 0 rss: 75Mb L: 34/35 MS: 1 ChangeBinInt- 00:07:09.717 [2024-10-29 22:14:29.075339] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.075562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00020000 cdw11:000000f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 
[2024-10-29 22:14:29.075592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.717 #19 NEW cov: 12440 ft: 14755 corp: 11/260b lim: 35 exec/s: 0 rss: 75Mb L: 7/35 MS: 1 ChangeBit- 00:07:09.717 [2024-10-29 22:14:29.115565] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.115688] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.115800] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.115911] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.116133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.116161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.717 [2024-10-29 22:14:29.116215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.116233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.717 [2024-10-29 22:14:29.116287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.116310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.717 [2024-10-29 22:14:29.116365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.116382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.717 #20 NEW cov: 12440 ft: 14777 corp: 12/294b lim: 35 exec/s: 0 rss: 75Mb L: 34/35 MS: 1 ShuffleBytes- 00:07:09.717 [2024-10-29 22:14:29.175756] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.175877] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.175991] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.176105] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.176329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.176358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.717 [2024-10-29 22:14:29.176414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.176431] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.717 [2024-10-29 22:14:29.176487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.176503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.717 [2024-10-29 22:14:29.176556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.176572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.717 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:09.717 #21 NEW cov: 12463 ft: 14819 corp: 13/328b lim: 35 exec/s: 0 rss: 75Mb L: 34/35 MS: 1 ChangeBit- 00:07:09.717 [2024-10-29 22:14:29.215887] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.216011] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.216127] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.216242] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.216364] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.717 [2024-10-29 22:14:29.216591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.216620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.717 [2024-10-29 22:14:29.216678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.216694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.717 [2024-10-29 22:14:29.216753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.216769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.717 [2024-10-29 22:14:29.216825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.216842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.717 [2024-10-29 22:14:29.216897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.717 [2024-10-29 22:14:29.216913] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:09.976 #22 NEW cov: 12463 ft: 14894 corp: 14/363b lim: 35 exec/s: 0 rss: 75Mb L: 35/35 MS: 1 CrossOver- 00:07:09.976 [2024-10-29 22:14:29.276058] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.976 [2024-10-29 22:14:29.276181] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.976 [2024-10-29 22:14:29.276295] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.976 [2024-10-29 22:14:29.276415] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.976 [2024-10-29 22:14:29.276524] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.976 [2024-10-29 22:14:29.276741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.276769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.976 [2024-10-29 22:14:29.276828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.276844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.976 [2024-10-29 22:14:29.276901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.276918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.976 [2024-10-29 22:14:29.276976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.276993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.976 [2024-10-29 22:14:29.277050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.277067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:09.976 #23 NEW cov: 12463 ft: 14983 corp: 15/398b lim: 35 exec/s: 23 rss: 75Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:09.976 [2024-10-29 22:14:29.316129] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.976 [2024-10-29 22:14:29.316248] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.976 [2024-10-29 22:14:29.316368] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.976 [2024-10-29 22:14:29.316480] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.976 [2024-10-29 22:14:29.316692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.316720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.976 [2024-10-29 22:14:29.316776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.316792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.976 [2024-10-29 22:14:29.316848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.316865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.976 [2024-10-29 22:14:29.316921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.316938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.976 #29 NEW cov: 12463 ft: 14987 corp: 16/432b lim: 35 exec/s: 29 rss: 75Mb L: 34/35 MS: 1 CopyPart- 00:07:09.976 [2024-10-29 22:14:29.356261] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.976 [2024-10-29 22:14:29.356388] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.976 [2024-10-29 22:14:29.356499] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.976 [2024-10-29 22:14:29.356612] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.976 [2024-10-29 22:14:29.356825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.356855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.976 [2024-10-29 22:14:29.356911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.356929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.976 [2024-10-29 22:14:29.356983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.356999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.976 [2024-10-29 22:14:29.357052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.357068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.976 #30 NEW cov: 12463 ft: 
15007 corp: 17/466b lim: 35 exec/s: 30 rss: 75Mb L: 34/35 MS: 1 EraseBytes- 00:07:09.976 [2024-10-29 22:14:29.396486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000040 cdw11:000000f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.396524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.976 #31 NEW cov: 12463 ft: 15085 corp: 18/474b lim: 35 exec/s: 31 rss: 75Mb L: 8/35 MS: 1 CopyPart- 00:07:09.976 [2024-10-29 22:14:29.456890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00f60040 cdw11:f600f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.456919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.976 [2024-10-29 22:14:29.456977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f6f600f6 cdw11:f600f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.456992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.976 [2024-10-29 22:14:29.457047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f6f600f6 cdw11:f600f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.976 [2024-10-29 22:14:29.457062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.976 #35 NEW cov: 12463 ft: 15318 corp: 19/501b lim: 35 exec/s: 35 rss: 75Mb L: 27/35 MS: 4 EraseBytes-InsertByte-ChangeByte-InsertRepeatedBytes- 00:07:09.976 [2024-10-29 22:14:29.496636] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.976 [2024-10-29 22:14:29.496761] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.977 [2024-10-29 22:14:29.496872] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:09.977 [2024-10-29 22:14:29.497086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.977 [2024-10-29 22:14:29.497116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.977 [2024-10-29 22:14:29.497173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.977 [2024-10-29 22:14:29.497190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.977 [2024-10-29 22:14:29.497244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.977 [2024-10-29 22:14:29.497261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.236 #36 NEW cov: 12463 ft: 15345 corp: 20/528b lim: 35 exec/s: 36 rss: 75Mb L: 27/35 MS: 1 EraseBytes- 00:07:10.236 [2024-10-29 22:14:29.537185] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for 
invalid NSID 0 00:07:10.236 [2024-10-29 22:14:29.537426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:10100040 cdw11:10001010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.236 [2024-10-29 22:14:29.537453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.236 [2024-10-29 22:14:29.537510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:10100010 cdw11:10001010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.236 [2024-10-29 22:14:29.537525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.236 [2024-10-29 22:14:29.537580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:10100010 cdw11:10001010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.236 [2024-10-29 22:14:29.537595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.236 [2024-10-29 22:14:29.537650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:10100010 cdw11:10001010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.236 [2024-10-29 22:14:29.537665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.236 [2024-10-29 22:14:29.537720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000f300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.236 [2024-10-29 22:14:29.537740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:10.236 #37 NEW cov: 12463 ft: 15382 corp: 21/563b lim: 35 exec/s: 37 rss: 75Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:10.236 [2024-10-29 22:14:29.596953] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.236 [2024-10-29 22:14:29.597080] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.236 [2024-10-29 22:14:29.597189] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.236 [2024-10-29 22:14:29.597307] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.236 [2024-10-29 22:14:29.597521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.236 [2024-10-29 22:14:29.597549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.236 [2024-10-29 22:14:29.597605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:002a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.236 [2024-10-29 22:14:29.597623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.236 [2024-10-29 22:14:29.597677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.236 [2024-10-29 22:14:29.597694] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.236 [2024-10-29 22:14:29.597748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.236 [2024-10-29 22:14:29.597764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.236 #38 NEW cov: 12463 ft: 15413 corp: 22/596b lim: 35 exec/s: 38 rss: 75Mb L: 33/35 MS: 1 ChangeByte- 00:07:10.236 [2024-10-29 22:14:29.637056] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.236 [2024-10-29 22:14:29.637176] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.236 [2024-10-29 22:14:29.637291] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.236 [2024-10-29 22:14:29.637411] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.236 [2024-10-29 22:14:29.637520] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.236 [2024-10-29 22:14:29.637750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.236 [2024-10-29 22:14:29.637778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.236 [2024-10-29 22:14:29.637836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:23000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.236 [2024-10-29 22:14:29.637853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.236 [2024-10-29 22:14:29.637909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.237 [2024-10-29 22:14:29.637926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.237 [2024-10-29 22:14:29.637979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.237 [2024-10-29 22:14:29.637999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.237 [2024-10-29 22:14:29.638055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00cb0000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.237 [2024-10-29 22:14:29.638071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:10.237 #39 NEW cov: 12463 ft: 15437 corp: 23/631b lim: 35 exec/s: 39 rss: 75Mb L: 35/35 MS: 1 ChangeByte- 00:07:10.237 [2024-10-29 22:14:29.697226] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.237 [2024-10-29 22:14:29.697459] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.237 [2024-10-29 22:14:29.697573] 
ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.237 [2024-10-29 22:14:29.697799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:ff0000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.237 [2024-10-29 22:14:29.697827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.237 [2024-10-29 22:14:29.697883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.237 [2024-10-29 22:14:29.697899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.237 [2024-10-29 22:14:29.697955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.237 [2024-10-29 22:14:29.697973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.237 [2024-10-29 22:14:29.698028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.237 [2024-10-29 22:14:29.698043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.237 #40 NEW cov: 12463 ft: 15441 corp: 24/665b lim: 35 exec/s: 40 rss: 75Mb L: 34/35 MS: 1 ChangeBinInt- 00:07:10.237 [2024-10-29 22:14:29.737229] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.237 [2024-10-29 22:14:29.737454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00400000 cdw11:10001010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.237 [2024-10-29 22:14:29.737482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.495 #41 NEW cov: 12463 ft: 15452 corp: 25/675b lim: 35 exec/s: 41 rss: 75Mb L: 10/35 MS: 1 CrossOver- 00:07:10.495 [2024-10-29 22:14:29.797493] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.495 [2024-10-29 22:14:29.797713] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.495 [2024-10-29 22:14:29.797831] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.495 [2024-10-29 22:14:29.798050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.495 [2024-10-29 22:14:29.798078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.495 [2024-10-29 22:14:29.798135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fff700ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.495 [2024-10-29 22:14:29.798150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.495 [2024-10-29 22:14:29.798210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.495 [2024-10-29 22:14:29.798227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.495 [2024-10-29 22:14:29.798283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.495 [2024-10-29 22:14:29.798302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.495 #42 NEW cov: 12463 ft: 15455 corp: 26/709b lim: 35 exec/s: 42 rss: 75Mb L: 34/35 MS: 1 ChangeBinInt- 00:07:10.495 [2024-10-29 22:14:29.837633] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.495 [2024-10-29 22:14:29.837756] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.495 [2024-10-29 22:14:29.837869] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.495 [2024-10-29 22:14:29.837980] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.495 [2024-10-29 22:14:29.838094] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.495 [2024-10-29 22:14:29.838317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.495 [2024-10-29 22:14:29.838346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.495 [2024-10-29 22:14:29.838404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.495 [2024-10-29 22:14:29.838421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.495 [2024-10-29 22:14:29.838479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.495 [2024-10-29 22:14:29.838496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.495 [2024-10-29 22:14:29.838553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.496 [2024-10-29 22:14:29.838571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.496 [2024-10-29 22:14:29.838629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.496 [2024-10-29 22:14:29.838645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:10.496 #43 NEW cov: 12463 ft: 15480 corp: 27/744b lim: 35 exec/s: 43 rss: 75Mb L: 35/35 MS: 1 InsertByte- 00:07:10.496 [2024-10-29 22:14:29.897780] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for 
invalid NSID 0 00:07:10.496 [2024-10-29 22:14:29.897903] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.496 [2024-10-29 22:14:29.898016] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.496 [2024-10-29 22:14:29.898126] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.496 [2024-10-29 22:14:29.898356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.496 [2024-10-29 22:14:29.898384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.496 [2024-10-29 22:14:29.898447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:002a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.496 [2024-10-29 22:14:29.898463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.496 [2024-10-29 22:14:29.898521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.496 [2024-10-29 22:14:29.898537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.496 [2024-10-29 22:14:29.898593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.496 [2024-10-29 22:14:29.898608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.496 #44 NEW cov: 12463 ft: 15512 corp: 28/778b lim: 35 exec/s: 44 rss: 75Mb L: 34/35 MS: 1 CrossOver- 00:07:10.496 [2024-10-29 22:14:29.957987] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.496 [2024-10-29 22:14:29.958109] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.496 [2024-10-29 22:14:29.958223] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.496 [2024-10-29 22:14:29.958340] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.496 [2024-10-29 22:14:29.958453] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.496 [2024-10-29 22:14:29.958678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.496 [2024-10-29 22:14:29.958705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.496 [2024-10-29 22:14:29.958762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:31000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.496 [2024-10-29 22:14:29.958780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.496 [2024-10-29 22:14:29.958835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:6 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.496 [2024-10-29 22:14:29.958852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.496 [2024-10-29 22:14:29.958907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.496 [2024-10-29 22:14:29.958923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.496 [2024-10-29 22:14:29.958978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.496 [2024-10-29 22:14:29.958995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:10.496 #45 NEW cov: 12463 ft: 15520 corp: 29/813b lim: 35 exec/s: 45 rss: 75Mb L: 35/35 MS: 1 InsertByte- 00:07:10.496 [2024-10-29 22:14:29.997971] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.496 [2024-10-29 22:14:29.998294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.496 [2024-10-29 22:14:29.998328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.496 [2024-10-29 22:14:29.998388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fff700ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.496 [2024-10-29 22:14:29.998403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.754 #46 NEW cov: 12463 ft: 15722 corp: 30/830b lim: 35 exec/s: 46 rss: 75Mb L: 17/35 MS: 1 CrossOver- 00:07:10.754 [2024-10-29 22:14:30.058211] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.754 [2024-10-29 22:14:30.058461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:f30000f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.754 [2024-10-29 22:14:30.058490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.754 #47 NEW cov: 12463 ft: 15731 corp: 31/837b lim: 35 exec/s: 47 rss: 75Mb L: 7/35 MS: 1 CrossOver- 00:07:10.754 [2024-10-29 22:14:30.098408] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.754 [2024-10-29 22:14:30.098537] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.754 [2024-10-29 22:14:30.098661] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.754 [2024-10-29 22:14:30.098773] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.754 [2024-10-29 22:14:30.098990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.754 [2024-10-29 22:14:30.099020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.754 [2024-10-29 22:14:30.099078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.754 [2024-10-29 22:14:30.099095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.754 [2024-10-29 22:14:30.099152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.754 [2024-10-29 22:14:30.099168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.754 [2024-10-29 22:14:30.099227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.754 [2024-10-29 22:14:30.099243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.754 #48 NEW cov: 12463 ft: 15766 corp: 32/871b lim: 35 exec/s: 48 rss: 75Mb L: 34/35 MS: 1 ShuffleBytes- 00:07:10.754 [2024-10-29 22:14:30.138649] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.754 [2024-10-29 22:14:30.138981] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.754 [2024-10-29 22:14:30.139214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:10100040 cdw11:00001000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.754 [2024-10-29 22:14:30.139242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.754 [2024-10-29 22:14:30.139303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f3000000 cdw11:10000010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.754 [2024-10-29 22:14:30.139322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.754 [2024-10-29 22:14:30.139380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:10100010 cdw11:10001010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.754 [2024-10-29 22:14:30.139398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.754 [2024-10-29 22:14:30.139457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:10100010 cdw11:10001010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.754 [2024-10-29 22:14:30.139471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.754 [2024-10-29 22:14:30.139530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000f300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.754 [2024-10-29 22:14:30.139547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:10.754 #49 NEW cov: 12463 ft: 15830 corp: 33/906b lim: 35 exec/s: 49 rss: 76Mb L: 35/35 MS: 1 
CrossOver- 00:07:10.754 [2024-10-29 22:14:30.199150] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.754 [2024-10-29 22:14:30.199407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:10100040 cdw11:10001010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.754 [2024-10-29 22:14:30.199435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.754 [2024-10-29 22:14:30.199495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:10100010 cdw11:10001010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.754 [2024-10-29 22:14:30.199511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.754 [2024-10-29 22:14:30.199568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:10100010 cdw11:10001010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.754 [2024-10-29 22:14:30.199583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.754 [2024-10-29 22:14:30.199641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:10100010 cdw11:10001010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.754 [2024-10-29 22:14:30.199656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.754 [2024-10-29 22:14:30.199716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000f300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.755 [2024-10-29 22:14:30.199733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:10.755 #50 NEW cov: 12463 ft: 15848 corp: 34/941b lim: 35 exec/s: 50 rss: 76Mb L: 35/35 MS: 1 CrossOver- 00:07:10.755 [2024-10-29 22:14:30.238816] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.755 [2024-10-29 22:14:30.238940] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.755 [2024-10-29 22:14:30.239057] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.755 [2024-10-29 22:14:30.239174] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:10.755 [2024-10-29 22:14:30.239406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.755 [2024-10-29 22:14:30.239435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.755 [2024-10-29 22:14:30.239493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.755 [2024-10-29 22:14:30.239511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.755 [2024-10-29 22:14:30.239573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:10.755 [2024-10-29 22:14:30.239590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.755 [2024-10-29 22:14:30.239647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.755 [2024-10-29 22:14:30.239663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.755 #51 NEW cov: 12463 ft: 15884 corp: 35/975b lim: 35 exec/s: 51 rss: 76Mb L: 34/35 MS: 1 ChangeBinInt- 00:07:11.013 [2024-10-29 22:14:30.278853] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:11.013 [2024-10-29 22:14:30.279193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.013 [2024-10-29 22:14:30.279222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.013 [2024-10-29 22:14:30.279281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fff700ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.013 [2024-10-29 22:14:30.279302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.013 #52 NEW cov: 12463 ft: 15950 corp: 36/993b lim: 35 exec/s: 26 rss: 76Mb L: 18/35 MS: 1 InsertByte- 00:07:11.013 #52 DONE cov: 12463 ft: 15950 corp: 36/993b lim: 35 exec/s: 26 rss: 76Mb 00:07:11.013 Done 52 runs in 2 second(s) 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:11.013 22:14:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:07:11.013 [2024-10-29 22:14:30.475862] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:11.013 [2024-10-29 22:14:30.475927] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101098 ] 00:07:11.272 [2024-10-29 22:14:30.677011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.272 [2024-10-29 22:14:30.714936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.272 [2024-10-29 22:14:30.774344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.272 [2024-10-29 22:14:30.790533] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:11.530 INFO: Running with entropic power schedule (0xFF, 100). 00:07:11.530 INFO: Seed: 738728813 00:07:11.530 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:11.530 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:11.530 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:11.530 INFO: A corpus is not provided, starting from an empty corpus 00:07:11.530 #2 INITED exec/s: 0 rss: 66Mb 00:07:11.530 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:11.530 This may also happen if the target rejected all inputs we tried so far 00:07:11.788 NEW_FUNC[1/704]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:11.788 NEW_FUNC[2/704]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:11.788 #4 NEW cov: 12119 ft: 12120 corp: 2/14b lim: 20 exec/s: 0 rss: 74Mb L: 13/13 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:11.788 #5 NEW cov: 12249 ft: 12841 corp: 3/27b lim: 20 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:11.788 #11 NEW cov: 12255 ft: 13147 corp: 4/40b lim: 20 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeByte- 00:07:12.047 #12 NEW cov: 12340 ft: 13414 corp: 5/53b lim: 20 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeByte- 00:07:12.047 #13 NEW cov: 12340 ft: 13614 corp: 6/66b lim: 20 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeByte- 00:07:12.047 #19 NEW cov: 12340 ft: 13667 corp: 7/79b lim: 20 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:07:12.047 #20 NEW cov: 12340 ft: 13764 corp: 8/92b lim: 20 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:12.047 #21 NEW cov: 12340 ft: 13793 corp: 9/106b lim: 20 exec/s: 0 rss: 74Mb L: 14/14 MS: 1 InsertByte- 00:07:12.305 #23 NEW cov: 12341 ft: 14122 corp: 10/115b lim: 20 exec/s: 0 rss: 74Mb L: 9/14 MS: 2 ShuffleBytes-CMP- DE: "\3774\3455\246\231\226\350"- 00:07:12.305 #24 NEW cov: 12341 ft: 14181 corp: 11/128b lim: 20 exec/s: 0 rss: 74Mb L: 13/14 MS: 1 ChangeBit- 00:07:12.305 #25 NEW cov: 12341 ft: 14197 corp: 12/141b lim: 20 exec/s: 0 rss: 74Mb L: 13/14 MS: 1 ChangeBit- 00:07:12.305 #26 NEW cov: 12341 ft: 14216 corp: 13/154b lim: 20 exec/s: 0 rss: 74Mb L: 13/14 MS: 1 ChangeByte- 00:07:12.305 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:12.305 #27 NEW cov: 12364 ft: 14272 corp: 14/167b lim: 20 exec/s: 0 rss: 74Mb L: 13/14 MS: 1 ChangeBinInt- 00:07:12.564 #28 NEW cov: 12381 ft: 14477 corp: 15/185b lim: 20 exec/s: 0 rss: 74Mb L: 18/18 MS: 1 CopyPart- 00:07:12.564 #29 NEW cov: 12381 ft: 14496 corp: 16/198b lim: 20 exec/s: 29 rss: 74Mb L: 13/18 MS: 1 ChangeByte- 00:07:12.564 #30 NEW cov: 12381 ft: 14561 corp: 17/212b lim: 20 exec/s: 30 rss: 75Mb L: 14/18 MS: 1 InsertByte- 00:07:12.564 #31 NEW cov: 12381 ft: 14577 corp: 18/222b lim: 20 exec/s: 31 rss: 75Mb L: 10/18 MS: 1 CrossOver- 00:07:12.564 #32 NEW cov: 12381 ft: 14600 corp: 19/236b lim: 20 exec/s: 32 rss: 75Mb L: 14/18 MS: 1 InsertByte- 00:07:12.823 #33 NEW cov: 12381 ft: 14670 corp: 20/249b lim: 20 exec/s: 33 rss: 75Mb L: 13/18 MS: 1 ShuffleBytes- 00:07:12.823 #34 NEW cov: 12381 ft: 14680 corp: 21/262b lim: 20 exec/s: 34 rss: 75Mb L: 13/18 MS: 1 ShuffleBytes- 00:07:12.823 #35 NEW cov: 12381 ft: 14712 corp: 22/281b lim: 20 exec/s: 35 rss: 75Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:07:12.823 #36 NEW cov: 12381 ft: 14749 corp: 23/294b lim: 20 exec/s: 36 rss: 75Mb L: 13/19 MS: 1 ShuffleBytes- 00:07:12.823 #37 NEW cov: 12381 ft: 14773 corp: 24/309b lim: 20 exec/s: 37 rss: 75Mb L: 15/19 MS: 1 InsertByte- 00:07:13.110 #38 NEW cov: 12381 ft: 14794 corp: 25/326b lim: 20 exec/s: 38 rss: 75Mb L: 17/19 MS: 1 PersAutoDict- DE: "\3774\3455\246\231\226\350"- 00:07:13.110 NEW_FUNC[1/4]: 0x1363948 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3477 00:07:13.110 NEW_FUNC[2/4]: 0x13644c8 in nvmf_qpair_abort_aer 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3419 00:07:13.110 #39 NEW cov: 12464 ft: 14906 corp: 26/344b lim: 20 exec/s: 39 rss: 75Mb L: 18/19 MS: 1 InsertRepeatedBytes- 00:07:13.110 #40 NEW cov: 12464 ft: 14938 corp: 27/353b lim: 20 exec/s: 40 rss: 75Mb L: 9/19 MS: 1 ChangeBit- 00:07:13.110 #41 NEW cov: 12464 ft: 15003 corp: 28/366b lim: 20 exec/s: 41 rss: 75Mb L: 13/19 MS: 1 ChangeBinInt- 00:07:13.110 #42 NEW cov: 12464 ft: 15008 corp: 29/380b lim: 20 exec/s: 42 rss: 75Mb L: 14/19 MS: 1 InsertByte- 00:07:13.110 #43 NEW cov: 12464 ft: 15024 corp: 30/394b lim: 20 exec/s: 43 rss: 75Mb L: 14/19 MS: 1 ShuffleBytes- 00:07:13.368 #44 NEW cov: 12464 ft: 15075 corp: 31/403b lim: 20 exec/s: 44 rss: 75Mb L: 9/19 MS: 1 EraseBytes- 00:07:13.368 #45 NEW cov: 12464 ft: 15128 corp: 32/419b lim: 20 exec/s: 45 rss: 75Mb L: 16/19 MS: 1 CrossOver- 00:07:13.368 #46 NEW cov: 12464 ft: 15137 corp: 33/433b lim: 20 exec/s: 46 rss: 75Mb L: 14/19 MS: 1 InsertByte- 00:07:13.369 #47 NEW cov: 12464 ft: 15141 corp: 34/446b lim: 20 exec/s: 47 rss: 75Mb L: 13/19 MS: 1 ChangeBinInt- 00:07:13.369 #48 NEW cov: 12464 ft: 15149 corp: 35/455b lim: 20 exec/s: 48 rss: 75Mb L: 9/19 MS: 1 EraseBytes- 00:07:13.369 #49 NEW cov: 12464 ft: 15164 corp: 36/468b lim: 20 exec/s: 24 rss: 75Mb L: 13/19 MS: 1 PersAutoDict- DE: "\3774\3455\246\231\226\350"- 00:07:13.369 #49 DONE cov: 12464 ft: 15164 corp: 36/468b lim: 20 exec/s: 24 rss: 75Mb 00:07:13.369 ###### Recommended dictionary. ###### 00:07:13.369 "\3774\3455\246\231\226\350" # Uses: 2 00:07:13.369 ###### End of recommended dictionary. ###### 00:07:13.369 Done 49 runs in 2 second(s) 00:07:13.628 22:14:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:07:13.628 22:14:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:13.628 22:14:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:13.628 22:14:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:13.628 22:14:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:13.628 22:14:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:13.628 22:14:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:13.628 22:14:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:13.628 22:14:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:13.628 22:14:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:13.628 22:14:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:13.628 22:14:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:07:13.628 22:14:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:07:13.628 22:14:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:13.628 22:14:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:13.628 22:14:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:13.628 22:14:33 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:13.628 22:14:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:13.628 22:14:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:07:13.628 [2024-10-29 22:14:33.036979] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:13.628 [2024-10-29 22:14:33.037046] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101451 ] 00:07:13.887 [2024-10-29 22:14:33.243953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.887 [2024-10-29 22:14:33.286083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.887 [2024-10-29 22:14:33.345537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.887 [2024-10-29 22:14:33.361704] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:13.887 INFO: Running with entropic power schedule (0xFF, 100). 00:07:13.887 INFO: Seed: 3309728332 00:07:13.887 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:13.887 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:13.887 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:13.887 INFO: A corpus is not provided, starting from an empty corpus 00:07:13.887 #2 INITED exec/s: 0 rss: 66Mb 00:07:13.887 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:13.887 This may also happen if the target rejected all inputs we tried so far 00:07:14.145 [2024-10-29 22:14:33.417258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.145 [2024-10-29 22:14:33.417303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.403 NEW_FUNC[1/716]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:14.403 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:14.403 #7 NEW cov: 12246 ft: 12220 corp: 2/14b lim: 35 exec/s: 0 rss: 75Mb L: 13/13 MS: 5 ChangeBit-InsertByte-CopyPart-CopyPart-InsertRepeatedBytes- 00:07:14.403 [2024-10-29 22:14:33.758291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.403 [2024-10-29 22:14:33.758360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.403 #8 NEW cov: 12360 ft: 13028 corp: 3/27b lim: 35 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeByte- 00:07:14.403 [2024-10-29 22:14:33.828249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23172323 cdw11:a05e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.403 [2024-10-29 22:14:33.828278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.403 #9 NEW cov: 12366 ft: 13229 corp: 4/40b lim: 35 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 CMP- DE: "\027\240^\3366\3455\000"- 00:07:14.403 [2024-10-29 22:14:33.888395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:fdff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.403 [2024-10-29 22:14:33.888422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.403 #10 NEW cov: 12451 ft: 13606 corp: 5/53b lim: 35 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:14.662 [2024-10-29 22:14:33.928529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.662 [2024-10-29 22:14:33.928558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.662 #11 NEW cov: 12451 ft: 13699 corp: 6/66b lim: 35 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBit- 00:07:14.662 [2024-10-29 22:14:33.968908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.662 [2024-10-29 22:14:33.968936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.662 [2024-10-29 22:14:33.968994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.662 [2024-10-29 22:14:33.969009] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.662 [2024-10-29 22:14:33.969066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.662 [2024-10-29 22:14:33.969081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.662 #17 NEW cov: 12451 ft: 14499 corp: 7/93b lim: 35 exec/s: 0 rss: 76Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:07:14.662 [2024-10-29 22:14:34.029283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.662 [2024-10-29 22:14:34.029316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.662 [2024-10-29 22:14:34.029373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.662 [2024-10-29 22:14:34.029388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.662 [2024-10-29 22:14:34.029444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:3eff3e3e cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.662 [2024-10-29 22:14:34.029459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.662 [2024-10-29 22:14:34.029512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.662 [2024-10-29 22:14:34.029526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.662 #18 NEW cov: 12451 ft: 14918 corp: 8/123b lim: 35 exec/s: 0 rss: 76Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:14.662 [2024-10-29 22:14:34.088909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23172323 cdw11:a05e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.662 [2024-10-29 22:14:34.088936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.662 #19 NEW cov: 12451 ft: 14946 corp: 9/136b lim: 35 exec/s: 0 rss: 76Mb L: 13/30 MS: 1 CrossOver- 00:07:14.662 [2024-10-29 22:14:34.149777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.662 [2024-10-29 22:14:34.149805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.662 [2024-10-29 22:14:34.149865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.662 [2024-10-29 22:14:34.149881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.662 [2024-10-29 22:14:34.149937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 
cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.662 [2024-10-29 22:14:34.149952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.662 [2024-10-29 22:14:34.150013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:17a0ffff cdw11:5ede0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.662 [2024-10-29 22:14:34.150028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.662 [2024-10-29 22:14:34.150086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00ffe535 cdw11:feff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.662 [2024-10-29 22:14:34.150100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:14.662 #20 NEW cov: 12451 ft: 15078 corp: 10/171b lim: 35 exec/s: 0 rss: 76Mb L: 35/35 MS: 1 PersAutoDict- DE: "\027\240^\3366\3455\000"- 00:07:14.921 [2024-10-29 22:14:34.189301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff23ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.189329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.921 #21 NEW cov: 12451 ft: 15106 corp: 11/180b lim: 35 exec/s: 0 rss: 76Mb L: 9/35 MS: 1 EraseBytes- 00:07:14.921 [2024-10-29 22:14:34.230033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.230061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.921 [2024-10-29 22:14:34.230120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.230136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.921 [2024-10-29 22:14:34.230194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.230209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.921 [2024-10-29 22:14:34.230267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.230283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.921 [2024-10-29 22:14:34.230342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.230356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:14.921 #23 NEW cov: 12451 ft: 15122 corp: 12/215b lim: 35 exec/s: 0 rss: 76Mb L: 35/35 MS: 2 
CrossOver-InsertRepeatedBytes- 00:07:14.921 [2024-10-29 22:14:34.279814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ff230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.279842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.921 [2024-10-29 22:14:34.279900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:f0ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.279915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.921 [2024-10-29 22:14:34.279970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:23ffffff cdw11:fff00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.279985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.921 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:14.921 #24 NEW cov: 12474 ft: 15249 corp: 13/240b lim: 35 exec/s: 0 rss: 76Mb L: 25/35 MS: 1 CopyPart- 00:07:14.921 [2024-10-29 22:14:34.320286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.320321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.921 [2024-10-29 22:14:34.320378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.320394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.921 [2024-10-29 22:14:34.320450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.320465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.921 [2024-10-29 22:14:34.320521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:17a0ffff cdw11:5ede0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.320536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.921 [2024-10-29 22:14:34.320594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00ffe535 cdw11:feff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.320609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:14.921 #25 NEW cov: 12474 ft: 15273 corp: 14/275b lim: 35 exec/s: 0 rss: 76Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:14.921 [2024-10-29 22:14:34.379781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23172323 cdw11:a05e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:14.921 [2024-10-29 22:14:34.379809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.921 #26 NEW cov: 12474 ft: 15339 corp: 15/288b lim: 35 exec/s: 0 rss: 77Mb L: 13/35 MS: 1 ChangeASCIIInt- 00:07:14.921 [2024-10-29 22:14:34.419913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:5ede17a0 cdw11:36e50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.921 [2024-10-29 22:14:34.419940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.921 #30 NEW cov: 12474 ft: 15362 corp: 16/297b lim: 35 exec/s: 30 rss: 77Mb L: 9/35 MS: 4 ShuffleBytes-ChangeBinInt-ChangeByte-PersAutoDict- DE: "\027\240^\3366\3455\000"- 00:07:15.180 [2024-10-29 22:14:34.460036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:5ede17a0 cdw11:36e50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.460063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.180 #31 NEW cov: 12474 ft: 15368 corp: 17/306b lim: 35 exec/s: 31 rss: 77Mb L: 9/35 MS: 1 PersAutoDict- DE: "\027\240^\3366\3455\000"- 00:07:15.180 [2024-10-29 22:14:34.500830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.500856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.180 [2024-10-29 22:14:34.500914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.500935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.180 [2024-10-29 22:14:34.500991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.501006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.180 [2024-10-29 22:14:34.501060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:17a0ffff cdw11:5e230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.501075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.180 [2024-10-29 22:14:34.501132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:feff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.501147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.180 #32 NEW cov: 12474 ft: 15433 corp: 18/341b lim: 35 exec/s: 32 rss: 77Mb L: 35/35 MS: 1 CopyPart- 00:07:15.180 [2024-10-29 22:14:34.560987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.561014] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.180 [2024-10-29 22:14:34.561070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.561086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.180 [2024-10-29 22:14:34.561142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.561156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.180 [2024-10-29 22:14:34.561212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:17ffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.561227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.180 [2024-10-29 22:14:34.561291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:feff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.561318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.180 #33 NEW cov: 12474 ft: 15468 corp: 19/376b lim: 35 exec/s: 33 rss: 77Mb L: 35/35 MS: 1 CopyPart- 00:07:15.180 [2024-10-29 22:14:34.620829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ff230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.620857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.180 [2024-10-29 22:14:34.620913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:71ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.620929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.180 [2024-10-29 22:14:34.620983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:23ffffff cdw11:fff00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.620998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.180 #34 NEW cov: 12474 ft: 15515 corp: 20/401b lim: 35 exec/s: 34 rss: 77Mb L: 25/35 MS: 1 ChangeByte- 00:07:15.180 [2024-10-29 22:14:34.681306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.681333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.180 [2024-10-29 22:14:34.681391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.681405] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.180 [2024-10-29 22:14:34.681458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ff8cffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.681472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.180 [2024-10-29 22:14:34.681528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.681542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.180 [2024-10-29 22:14:34.681598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.180 [2024-10-29 22:14:34.681613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.439 #35 NEW cov: 12474 ft: 15525 corp: 21/436b lim: 35 exec/s: 35 rss: 77Mb L: 35/35 MS: 1 ChangeByte- 00:07:15.439 [2024-10-29 22:14:34.740789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1ede17a0 cdw11:36e50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.439 [2024-10-29 22:14:34.740816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.439 #36 NEW cov: 12474 ft: 15556 corp: 22/445b lim: 35 exec/s: 36 rss: 77Mb L: 9/35 MS: 1 ChangeBit- 00:07:15.439 [2024-10-29 22:14:34.800939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.439 [2024-10-29 22:14:34.800965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.439 #37 NEW cov: 12474 ft: 15563 corp: 23/458b lim: 35 exec/s: 37 rss: 77Mb L: 13/35 MS: 1 CMP- DE: "\021\000\000\000"- 00:07:15.439 [2024-10-29 22:14:34.841211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.439 [2024-10-29 22:14:34.841238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.439 [2024-10-29 22:14:34.841302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff11 cdw11:32000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.439 [2024-10-29 22:14:34.841317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.439 #38 NEW cov: 12474 ft: 15768 corp: 24/472b lim: 35 exec/s: 38 rss: 77Mb L: 14/35 MS: 1 InsertByte- 00:07:15.439 [2024-10-29 22:14:34.901962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.439 [2024-10-29 22:14:34.901988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:15.439 [2024-10-29 22:14:34.902049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffff5b cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.439 [2024-10-29 22:14:34.902067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.439 [2024-10-29 22:14:34.902125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.439 [2024-10-29 22:14:34.902139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.439 [2024-10-29 22:14:34.902194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.439 [2024-10-29 22:14:34.902209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.439 [2024-10-29 22:14:34.902263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.439 [2024-10-29 22:14:34.902279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.439 #39 NEW cov: 12474 ft: 15805 corp: 25/507b lim: 35 exec/s: 39 rss: 77Mb L: 35/35 MS: 1 ChangeByte- 00:07:15.439 [2024-10-29 22:14:34.941352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a8c cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.440 [2024-10-29 22:14:34.941379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.440 #40 NEW cov: 12474 ft: 15889 corp: 26/518b lim: 35 exec/s: 40 rss: 77Mb L: 11/35 MS: 1 CrossOver- 00:07:15.699 [2024-10-29 22:14:34.981961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:34.981988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.699 [2024-10-29 22:14:34.982048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:34.982063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.699 [2024-10-29 22:14:34.982121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:34.982136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.699 [2024-10-29 22:14:34.982193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:34.982208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
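Each "#N NEW ... cov: ... ft: ... corp: ..." record in this output is libFuzzer's standard status line: cov counts covered code edges, ft counts features, corp gives the corpus entry count and total bytes, exec/s is the execution rate, rss is resident memory, and the trailing MS: field names the mutation(s) that produced the input (PersAutoDict/CMP mutations reuse the quoted DE: dictionary values). A minimal sketch for pulling those counters out of a saved console log to watch coverage growth; build.log is an illustrative filename, not a path produced by this job:
  # keep only the libFuzzer status records (NEW/DONE); grep -o drops the Jenkins timestamps
  grep -oE '#[0-9]+ (NEW|DONE) +cov: [0-9]+ ft: [0-9]+ corp: [0-9]+/[0-9]+b' build.log
  # reduce to run number, covered edges and corpus size for a quick growth curve
  grep -oE '#[0-9]+ (NEW|DONE) +cov: [0-9]+ ft: [0-9]+ corp: [0-9]+/[0-9]+b' build.log | awk '{print $1, $4, $8}'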
00:07:15.699 #41 NEW cov: 12474 ft: 15924 corp: 27/552b lim: 35 exec/s: 41 rss: 77Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:07:15.699 [2024-10-29 22:14:35.041642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:fdff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:35.041668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.699 #42 NEW cov: 12474 ft: 15930 corp: 28/565b lim: 35 exec/s: 42 rss: 77Mb L: 13/35 MS: 1 PersAutoDict- DE: "\021\000\000\000"- 00:07:15.699 [2024-10-29 22:14:35.102533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23fe2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:35.102560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.699 [2024-10-29 22:14:35.102618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:35.102633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.699 [2024-10-29 22:14:35.102690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:35.102704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.699 [2024-10-29 22:14:35.102761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:17a0ffff cdw11:5ede0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:35.102775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.699 [2024-10-29 22:14:35.102834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00ffe535 cdw11:feff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:35.102853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.699 #43 NEW cov: 12474 ft: 15952 corp: 29/600b lim: 35 exec/s: 43 rss: 77Mb L: 35/35 MS: 1 CMP- DE: "\376\377\377\377\000\000\000\000"- 00:07:15.699 [2024-10-29 22:14:35.141923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff23ff cdw11:2cff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:35.141960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.699 #44 NEW cov: 12474 ft: 15972 corp: 30/610b lim: 35 exec/s: 44 rss: 77Mb L: 10/35 MS: 1 InsertByte- 00:07:15.699 [2024-10-29 22:14:35.202805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:35.202832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.699 [2024-10-29 22:14:35.202890] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:35.202905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.699 [2024-10-29 22:14:35.202962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffff06 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:35.202977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.699 [2024-10-29 22:14:35.203029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:17a0ffff cdw11:5ede0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:35.203043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.699 [2024-10-29 22:14:35.203100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00ffe535 cdw11:feff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.699 [2024-10-29 22:14:35.203114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.959 #45 NEW cov: 12474 ft: 15981 corp: 31/645b lim: 35 exec/s: 45 rss: 77Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:15.959 [2024-10-29 22:14:35.242199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1ede17a0 cdw11:36e50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.959 [2024-10-29 22:14:35.242226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.959 #46 NEW cov: 12474 ft: 16081 corp: 32/654b lim: 35 exec/s: 46 rss: 77Mb L: 9/35 MS: 1 ChangeBinInt- 00:07:15.959 [2024-10-29 22:14:35.302351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a8c cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.959 [2024-10-29 22:14:35.302379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.959 #47 NEW cov: 12474 ft: 16111 corp: 33/665b lim: 35 exec/s: 47 rss: 77Mb L: 11/35 MS: 1 ChangeByte- 00:07:15.959 [2024-10-29 22:14:35.363134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:23ff2323 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.959 [2024-10-29 22:14:35.363161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.959 [2024-10-29 22:14:35.363219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.959 [2024-10-29 22:14:35.363233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.959 [2024-10-29 22:14:35.363289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.959 [2024-10-29 22:14:35.363309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.959 [2024-10-29 22:14:35.363366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:17a000ff cdw11:5ede0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.959 [2024-10-29 22:14:35.363381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.959 [2024-10-29 22:14:35.363439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00ffe535 cdw11:feff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.959 [2024-10-29 22:14:35.363453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.959 #48 NEW cov: 12474 ft: 16119 corp: 34/700b lim: 35 exec/s: 48 rss: 77Mb L: 35/35 MS: 1 CopyPart- 00:07:15.959 [2024-10-29 22:14:35.402767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a8c cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.959 [2024-10-29 22:14:35.402794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.959 [2024-10-29 22:14:35.402853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.959 [2024-10-29 22:14:35.402867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.959 #49 NEW cov: 12474 ft: 16133 corp: 35/719b lim: 35 exec/s: 24 rss: 78Mb L: 19/35 MS: 1 CMP- DE: "\000\004\000\000\000\000\000\000"- 00:07:15.959 #49 DONE cov: 12474 ft: 16133 corp: 35/719b lim: 35 exec/s: 24 rss: 78Mb 00:07:15.959 ###### Recommended dictionary. ###### 00:07:15.959 "\027\240^\3366\3455\000" # Uses: 3 00:07:15.959 "\021\000\000\000" # Uses: 1 00:07:15.959 "\376\377\377\377\000\000\000\000" # Uses: 0 00:07:15.959 "\000\004\000\000\000\000\000\000" # Uses: 0 00:07:15.959 ###### End of recommended dictionary. 
###### 00:07:15.959 Done 49 runs in 2 second(s) 00:07:16.218 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:07:16.218 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:16.218 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.218 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:16.218 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:16.218 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.218 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.218 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:16.218 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:16.218 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.218 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.218 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:07:16.218 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:07:16.219 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:16.219 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:16.219 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.219 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.219 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:16.219 22:14:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:07:16.219 [2024-10-29 22:14:35.609389] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:16.219 [2024-10-29 22:14:35.609456] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101804 ] 00:07:16.478 [2024-10-29 22:14:35.812286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.478 [2024-10-29 22:14:35.850805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.478 [2024-10-29 22:14:35.909907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.478 [2024-10-29 22:14:35.926078] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:16.479 INFO: Running with entropic power schedule (0xFF, 100). 
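The "###### Recommended dictionary. ######" block printed just above, at the end of run 4, lists the byte strings the fuzzer found worth keeping (here four entries with their use counts). A small sketch for carving that block out of a saved console log; build.log is an illustrative filename, a full job log may contain one such block per run, and note the entries are printed with octal escapes while libFuzzer's -dict= file format documents \xNN hex escapes, so converting the escaping (and confirming that this llvm_nvme_fuzz wrapper forwards -dict= at all) is an assumption left to verify:
  # copy everything between the dictionary markers and keep only the quoted entries
  sed -n '/Recommended dictionary/,/End of recommended dictionary/p' build.log \
    | grep -o '"[^"]*"' > nvmf_run4.dict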
00:07:16.479 INFO: Seed: 1578768262 00:07:16.479 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:16.479 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:16.479 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:16.479 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.479 #2 INITED exec/s: 0 rss: 66Mb 00:07:16.479 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:16.479 This may also happen if the target rejected all inputs we tried so far 00:07:16.479 [2024-10-29 22:14:35.991741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.479 [2024-10-29 22:14:35.991771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.479 [2024-10-29 22:14:35.991823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.479 [2024-10-29 22:14:35.991838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.997 NEW_FUNC[1/716]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:16.997 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:16.997 #9 NEW cov: 12240 ft: 12241 corp: 2/19b lim: 45 exec/s: 0 rss: 74Mb L: 18/18 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:16.997 [2024-10-29 22:14:36.332883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.997 [2024-10-29 22:14:36.332943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.997 [2024-10-29 22:14:36.333028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.997 [2024-10-29 22:14:36.333056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.997 #15 NEW cov: 12370 ft: 12897 corp: 3/37b lim: 45 exec/s: 0 rss: 74Mb L: 18/18 MS: 1 ShuffleBytes- 00:07:16.997 [2024-10-29 22:14:36.392763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00001200 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.997 [2024-10-29 22:14:36.392792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.997 [2024-10-29 22:14:36.392852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.997 [2024-10-29 22:14:36.392867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.997 #16 NEW cov: 12376 ft: 13279 corp: 4/55b lim: 45 exec/s: 0 rss: 74Mb L: 18/18 MS: 1 ChangeBinInt- 
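The run.sh trace above (and the matching ones for ports 4403 and 4404 earlier) repeats the same recipe for every fuzzer type: derive port 44NN from the type number, create a per-type corpus directory, rewrite trsvcid in fuzz_json.conf, install LSAN leak suppressions, and launch llvm_nvme_fuzz against 127.0.0.1. A rough by-hand equivalent of one iteration is sketched below; $SPDK is an assumed variable for the checked-out tree, the redirections are reconstructed (bash xtrace does not print them), and the target normally needs root plus configured hugepages like any SPDK app:
  SPDK=/path/to/spdk                        # illustrative; Jenkins uses /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  N=5
  PORT=44$(printf %02d "$N")                # -> 4405, matching run.sh@34
  CORPUS="$SPDK/../corpus/llvm_nvmf_$N"
  mkdir -p "$CORPUS"
  # point the JSON config at the per-type listener port (output file reconstructed from the -c flag below)
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$PORT\"/" \
      "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_$N.conf"
  # leak suppressions echoed by run.sh@41/42; assumed to land in the file that LSAN_OPTIONS names
  printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_nvmf_fuzz
  export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
  "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 -P "$SPDK/../output/llvm/" \
      -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$PORT" \
      -c "/tmp/fuzz_json_$N.conf" -t 1 -D "$CORPUS" -Z "$N"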
00:07:16.997 [2024-10-29 22:14:36.453053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.997 [2024-10-29 22:14:36.453080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.997 [2024-10-29 22:14:36.453133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1f1fffff cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.997 [2024-10-29 22:14:36.453147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.997 [2024-10-29 22:14:36.453202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.997 [2024-10-29 22:14:36.453217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.997 #17 NEW cov: 12461 ft: 13735 corp: 5/87b lim: 45 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:16.997 [2024-10-29 22:14:36.493291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.997 [2024-10-29 22:14:36.493338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.997 [2024-10-29 22:14:36.493392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.997 [2024-10-29 22:14:36.493407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.997 [2024-10-29 22:14:36.493459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff1fffff cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.998 [2024-10-29 22:14:36.493474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.998 [2024-10-29 22:14:36.493526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.998 [2024-10-29 22:14:36.493544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.998 #18 NEW cov: 12461 ft: 14142 corp: 6/130b lim: 45 exec/s: 0 rss: 74Mb L: 43/43 MS: 1 CrossOver- 00:07:17.257 [2024-10-29 22:14:36.533423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.257 [2024-10-29 22:14:36.533450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.257 [2024-10-29 22:14:36.533508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.257 [2024-10-29 22:14:36.533523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:07:17.257 [2024-10-29 22:14:36.533577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff1fffff cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.257 [2024-10-29 22:14:36.533591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.257 [2024-10-29 22:14:36.533647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.257 [2024-10-29 22:14:36.533662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.257 #19 NEW cov: 12461 ft: 14295 corp: 7/173b lim: 45 exec/s: 0 rss: 74Mb L: 43/43 MS: 1 CopyPart- 00:07:17.257 [2024-10-29 22:14:36.593625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.257 [2024-10-29 22:14:36.593652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.257 [2024-10-29 22:14:36.593708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.257 [2024-10-29 22:14:36.593723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.257 [2024-10-29 22:14:36.593775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff1fffff cdw11:1fac0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.257 [2024-10-29 22:14:36.593790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.257 [2024-10-29 22:14:36.593843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.257 [2024-10-29 22:14:36.593856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.257 #20 NEW cov: 12461 ft: 14426 corp: 8/217b lim: 45 exec/s: 0 rss: 74Mb L: 44/44 MS: 1 InsertByte- 00:07:17.257 [2024-10-29 22:14:36.653755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.257 [2024-10-29 22:14:36.653784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.257 [2024-10-29 22:14:36.653839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.257 [2024-10-29 22:14:36.653854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.257 [2024-10-29 22:14:36.653908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.257 [2024-10-29 22:14:36.653925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:07:17.257 [2024-10-29 22:14:36.653979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.257 [2024-10-29 22:14:36.653993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.257 #21 NEW cov: 12461 ft: 14458 corp: 9/258b lim: 45 exec/s: 0 rss: 74Mb L: 41/44 MS: 1 InsertRepeatedBytes- 00:07:17.257 [2024-10-29 22:14:36.693875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.257 [2024-10-29 22:14:36.693901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.258 [2024-10-29 22:14:36.693956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.258 [2024-10-29 22:14:36.693969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.258 [2024-10-29 22:14:36.694023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff1fffff cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.258 [2024-10-29 22:14:36.694038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.258 [2024-10-29 22:14:36.694091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.258 [2024-10-29 22:14:36.694105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.258 #27 NEW cov: 12461 ft: 14528 corp: 10/301b lim: 45 exec/s: 0 rss: 74Mb L: 43/44 MS: 1 ShuffleBytes- 00:07:17.258 [2024-10-29 22:14:36.733830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.258 [2024-10-29 22:14:36.733856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.258 [2024-10-29 22:14:36.733911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.258 [2024-10-29 22:14:36.733926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.258 [2024-10-29 22:14:36.733978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.258 [2024-10-29 22:14:36.733992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.258 #28 NEW cov: 12461 ft: 14618 corp: 11/332b lim: 45 exec/s: 0 rss: 74Mb L: 31/44 MS: 1 CrossOver- 00:07:17.516 [2024-10-29 22:14:36.793698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.516 [2024-10-29 22:14:36.793725] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.516 #29 NEW cov: 12461 ft: 15324 corp: 12/343b lim: 45 exec/s: 0 rss: 74Mb L: 11/44 MS: 1 EraseBytes- 00:07:17.517 [2024-10-29 22:14:36.834119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.517 [2024-10-29 22:14:36.834146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.517 [2024-10-29 22:14:36.834205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.517 [2024-10-29 22:14:36.834220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.517 [2024-10-29 22:14:36.834275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.517 [2024-10-29 22:14:36.834289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.517 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:17.517 #30 NEW cov: 12484 ft: 15368 corp: 13/373b lim: 45 exec/s: 0 rss: 74Mb L: 30/44 MS: 1 EraseBytes- 00:07:17.517 [2024-10-29 22:14:36.893990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.517 [2024-10-29 22:14:36.894016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.517 #31 NEW cov: 12484 ft: 15432 corp: 14/390b lim: 45 exec/s: 0 rss: 74Mb L: 17/44 MS: 1 EraseBytes- 00:07:17.517 [2024-10-29 22:14:36.954645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.517 [2024-10-29 22:14:36.954672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.517 [2024-10-29 22:14:36.954728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.517 [2024-10-29 22:14:36.954741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.517 [2024-10-29 22:14:36.954795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.517 [2024-10-29 22:14:36.954809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.517 [2024-10-29 22:14:36.954861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.517 [2024-10-29 22:14:36.954874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:17.517 #32 NEW cov: 12484 ft: 15442 corp: 15/432b lim: 45 exec/s: 32 rss: 74Mb L: 42/44 MS: 1 InsertByte- 00:07:17.517 [2024-10-29 22:14:36.994370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.517 [2024-10-29 22:14:36.994396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.517 [2024-10-29 22:14:36.994450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.517 [2024-10-29 22:14:36.994464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.517 #36 NEW cov: 12484 ft: 15470 corp: 16/451b lim: 45 exec/s: 36 rss: 74Mb L: 19/44 MS: 4 CrossOver-ChangeByte-ShuffleBytes-CrossOver- 00:07:17.776 [2024-10-29 22:14:37.054952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.054979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.776 [2024-10-29 22:14:37.055035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.055052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.776 [2024-10-29 22:14:37.055105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff1ffffe cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.055120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.776 [2024-10-29 22:14:37.055173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.055187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.776 #37 NEW cov: 12484 ft: 15539 corp: 17/494b lim: 45 exec/s: 37 rss: 75Mb L: 43/44 MS: 1 ChangeBit- 00:07:17.776 [2024-10-29 22:14:37.094560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.094587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.776 #38 NEW cov: 12484 ft: 15591 corp: 18/506b lim: 45 exec/s: 38 rss: 75Mb L: 12/44 MS: 1 CrossOver- 00:07:17.776 [2024-10-29 22:14:37.155182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.155208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.776 [2024-10-29 22:14:37.155263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ 
(01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.155277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.776 [2024-10-29 22:14:37.155333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff1fffff cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.155347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.776 [2024-10-29 22:14:37.155400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:2b001f1f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.155430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.776 #39 NEW cov: 12484 ft: 15622 corp: 19/549b lim: 45 exec/s: 39 rss: 75Mb L: 43/44 MS: 1 ChangeBinInt- 00:07:17.776 [2024-10-29 22:14:37.194972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.194999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.776 [2024-10-29 22:14:37.195054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.195069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.776 #40 NEW cov: 12484 ft: 15645 corp: 20/572b lim: 45 exec/s: 40 rss: 75Mb L: 23/44 MS: 1 InsertRepeatedBytes- 00:07:17.776 [2024-10-29 22:14:37.255473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.255499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.776 [2024-10-29 22:14:37.255562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.255577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.776 [2024-10-29 22:14:37.255630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff1fffff cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.255645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.776 [2024-10-29 22:14:37.255698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.255712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.776 #41 NEW cov: 12484 ft: 15658 corp: 21/615b lim: 45 exec/s: 41 rss: 75Mb L: 43/44 MS: 1 
ChangeBit- 00:07:17.776 [2024-10-29 22:14:37.295238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.295265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.776 [2024-10-29 22:14:37.295325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.776 [2024-10-29 22:14:37.295340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.035 #42 NEW cov: 12484 ft: 15678 corp: 22/633b lim: 45 exec/s: 42 rss: 75Mb L: 18/44 MS: 1 CopyPart- 00:07:18.035 [2024-10-29 22:14:37.335323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00001200 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.035 [2024-10-29 22:14:37.335349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.035 [2024-10-29 22:14:37.335404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff630007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.035 [2024-10-29 22:14:37.335419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.035 #43 NEW cov: 12484 ft: 15729 corp: 23/652b lim: 45 exec/s: 43 rss: 75Mb L: 19/44 MS: 1 InsertByte- 00:07:18.035 [2024-10-29 22:14:37.395829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.035 [2024-10-29 22:14:37.395855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.035 [2024-10-29 22:14:37.395910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.035 [2024-10-29 22:14:37.395924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.035 [2024-10-29 22:14:37.395978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff1fffff cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.035 [2024-10-29 22:14:37.395992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.035 [2024-10-29 22:14:37.396046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.035 [2024-10-29 22:14:37.396060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.035 #44 NEW cov: 12484 ft: 15737 corp: 24/695b lim: 45 exec/s: 44 rss: 75Mb L: 43/44 MS: 1 CopyPart- 00:07:18.035 [2024-10-29 22:14:37.435931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.035 [2024-10-29 22:14:37.435958] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.035 [2024-10-29 22:14:37.436013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.035 [2024-10-29 22:14:37.436027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.035 [2024-10-29 22:14:37.436081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff1fffff cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.036 [2024-10-29 22:14:37.436096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.036 [2024-10-29 22:14:37.436150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1f1f171f cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.036 [2024-10-29 22:14:37.436164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.036 #45 NEW cov: 12484 ft: 15758 corp: 25/738b lim: 45 exec/s: 45 rss: 75Mb L: 43/44 MS: 1 ChangeBit- 00:07:18.036 [2024-10-29 22:14:37.475583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5380135 cdw11:d2690004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.036 [2024-10-29 22:14:37.475609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.036 #51 NEW cov: 12484 ft: 15818 corp: 26/747b lim: 45 exec/s: 51 rss: 75Mb L: 9/44 MS: 1 CMP- DE: "\0015\3458\322i\221\252"- 00:07:18.036 [2024-10-29 22:14:37.515866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.036 [2024-10-29 22:14:37.515893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.036 [2024-10-29 22:14:37.515948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00001200 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.036 [2024-10-29 22:14:37.515962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.036 #52 NEW cov: 12484 ft: 15876 corp: 27/767b lim: 45 exec/s: 52 rss: 75Mb L: 20/44 MS: 1 CrossOver- 00:07:18.036 [2024-10-29 22:14:37.555960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:35e5ff01 cdw11:38d20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.036 [2024-10-29 22:14:37.555987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.036 [2024-10-29 22:14:37.556045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.036 [2024-10-29 22:14:37.556060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.295 #53 NEW cov: 12484 ft: 15891 corp: 28/792b lim: 45 exec/s: 53 rss: 75Mb L: 25/44 MS: 1 PersAutoDict- 
DE: "\0015\3458\322i\221\252"- 00:07:18.295 [2024-10-29 22:14:37.616431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.616457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.295 [2024-10-29 22:14:37.616512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.616530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.295 [2024-10-29 22:14:37.616584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.616598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.295 [2024-10-29 22:14:37.616652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.616666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.295 #54 NEW cov: 12484 ft: 15901 corp: 29/830b lim: 45 exec/s: 54 rss: 75Mb L: 38/44 MS: 1 CrossOver- 00:07:18.295 [2024-10-29 22:14:37.656517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.656544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.295 [2024-10-29 22:14:37.656599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.656612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.295 [2024-10-29 22:14:37.656666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffe1ffff cdw11:dd1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.656681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.295 [2024-10-29 22:14:37.656735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.656749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.295 #55 NEW cov: 12484 ft: 15911 corp: 30/873b lim: 45 exec/s: 55 rss: 75Mb L: 43/44 MS: 1 ChangeBinInt- 00:07:18.295 [2024-10-29 22:14:37.696646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0000ff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.696672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.295 [2024-10-29 22:14:37.696727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.696742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.295 [2024-10-29 22:14:37.696795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.696810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.295 [2024-10-29 22:14:37.696864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.696879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.295 #56 NEW cov: 12484 ft: 15929 corp: 31/916b lim: 45 exec/s: 56 rss: 75Mb L: 43/44 MS: 1 InsertRepeatedBytes- 00:07:18.295 [2024-10-29 22:14:37.756756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.756788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.295 [2024-10-29 22:14:37.756843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.756858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.295 [2024-10-29 22:14:37.756910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.756925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.295 #57 NEW cov: 12484 ft: 15957 corp: 32/947b lim: 45 exec/s: 57 rss: 76Mb L: 31/44 MS: 1 CMP- DE: "\000\000\000\000\001\000\000\000"- 00:07:18.295 [2024-10-29 22:14:37.817056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.817083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.295 [2024-10-29 22:14:37.817139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.817154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.295 [2024-10-29 22:14:37.817209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.817224] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.295 [2024-10-29 22:14:37.817279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.295 [2024-10-29 22:14:37.817293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.555 #58 NEW cov: 12484 ft: 15974 corp: 33/989b lim: 45 exec/s: 58 rss: 76Mb L: 42/44 MS: 1 ChangeBinInt- 00:07:18.555 [2024-10-29 22:14:37.876962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.555 [2024-10-29 22:14:37.876988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.555 [2024-10-29 22:14:37.877042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.555 [2024-10-29 22:14:37.877057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.555 [2024-10-29 22:14:37.877111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.555 [2024-10-29 22:14:37.877125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.555 #59 NEW cov: 12484 ft: 15997 corp: 34/1022b lim: 45 exec/s: 59 rss: 76Mb L: 33/44 MS: 1 InsertRepeatedBytes- 00:07:18.555 [2024-10-29 22:14:37.917261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.555 [2024-10-29 22:14:37.917287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.555 [2024-10-29 22:14:37.917349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fffbffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.555 [2024-10-29 22:14:37.917367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.555 [2024-10-29 22:14:37.917421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.555 [2024-10-29 22:14:37.917435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.555 [2024-10-29 22:14:37.917488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.555 [2024-10-29 22:14:37.917503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.555 #60 NEW cov: 12484 ft: 16021 corp: 35/1060b lim: 45 exec/s: 60 rss: 76Mb L: 38/44 MS: 1 ChangeBit- 00:07:18.555 [2024-10-29 22:14:37.976947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.555 [2024-10-29 22:14:37.976973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.555 #61 NEW cov: 12484 ft: 16044 corp: 36/1075b lim: 45 exec/s: 30 rss: 76Mb L: 15/44 MS: 1 EraseBytes- 00:07:18.555 #61 DONE cov: 12484 ft: 16044 corp: 36/1075b lim: 45 exec/s: 30 rss: 76Mb 00:07:18.555 ###### Recommended dictionary. ###### 00:07:18.555 "\0015\3458\322i\221\252" # Uses: 1 00:07:18.555 "\000\000\000\000\001\000\000\000" # Uses: 0 00:07:18.555 ###### End of recommended dictionary. ###### 00:07:18.555 Done 61 runs in 2 second(s) 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:18.815 22:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:18.815 [2024-10-29 22:14:38.155306] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:07:18.815 [2024-10-29 22:14:38.155377] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3102096 ] 00:07:19.074 [2024-10-29 22:14:38.358684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.074 [2024-10-29 22:14:38.397877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.074 [2024-10-29 22:14:38.457013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.074 [2024-10-29 22:14:38.473181] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:19.074 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.074 INFO: Seed: 4124770557 00:07:19.074 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:19.074 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:19.074 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:19.074 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.074 #2 INITED exec/s: 0 rss: 66Mb 00:07:19.074 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:19.074 This may also happen if the target rejected all inputs we tried so far 00:07:19.074 [2024-10-29 22:14:38.550330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e0a cdw11:00000000 00:07:19.074 [2024-10-29 22:14:38.550365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.642 NEW_FUNC[1/714]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:19.642 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:19.642 #4 NEW cov: 12157 ft: 12142 corp: 2/3b lim: 10 exec/s: 0 rss: 74Mb L: 2/2 MS: 2 ChangeByte-CrossOver- 00:07:19.642 [2024-10-29 22:14:38.891273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:19.642 [2024-10-29 22:14:38.891314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.643 #5 NEW cov: 12287 ft: 12772 corp: 3/5b lim: 10 exec/s: 0 rss: 74Mb L: 2/2 MS: 1 CopyPart- 00:07:19.643 [2024-10-29 22:14:38.941822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:19.643 [2024-10-29 22:14:38.941849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.643 [2024-10-29 22:14:38.941940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000007d cdw11:00000000 00:07:19.643 [2024-10-29 22:14:38.941955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.643 #9 NEW cov: 12293 ft: 13125 corp: 4/9b lim: 10 exec/s: 0 rss: 74Mb L: 4/4 MS: 4 ChangeByte-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:19.643 [2024-10-29 22:14:38.991569] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e81 cdw11:00000000 00:07:19.643 [2024-10-29 22:14:38.991595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.643 #10 NEW cov: 12378 ft: 13315 corp: 5/12b lim: 10 exec/s: 0 rss: 74Mb L: 3/4 MS: 1 InsertByte- 00:07:19.643 [2024-10-29 22:14:39.062011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e81 cdw11:00000000 00:07:19.643 [2024-10-29 22:14:39.062036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.643 #11 NEW cov: 12378 ft: 13369 corp: 6/14b lim: 10 exec/s: 0 rss: 74Mb L: 2/4 MS: 1 EraseBytes- 00:07:19.643 [2024-10-29 22:14:39.132527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000000fa cdw11:00000000 00:07:19.643 [2024-10-29 22:14:39.132551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.643 [2024-10-29 22:14:39.132640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff7d cdw11:00000000 00:07:19.643 [2024-10-29 22:14:39.132655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.902 #12 NEW cov: 12378 ft: 13458 corp: 7/18b lim: 10 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:19.902 [2024-10-29 22:14:39.202435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2f cdw11:00000000 00:07:19.902 [2024-10-29 22:14:39.202459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.902 #13 NEW cov: 12378 ft: 13504 corp: 8/20b lim: 10 exec/s: 0 rss: 74Mb L: 2/4 MS: 1 ChangeByte- 00:07:19.902 [2024-10-29 22:14:39.272701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:19.902 [2024-10-29 22:14:39.272727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.902 #14 NEW cov: 12378 ft: 13590 corp: 9/23b lim: 10 exec/s: 0 rss: 74Mb L: 3/4 MS: 1 CopyPart- 00:07:19.902 [2024-10-29 22:14:39.323899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:00000000 00:07:19.902 [2024-10-29 22:14:39.323925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.902 [2024-10-29 22:14:39.324012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:19.902 [2024-10-29 22:14:39.324029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.902 [2024-10-29 22:14:39.324117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:07:19.902 [2024-10-29 22:14:39.324131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.902 [2024-10-29 22:14:39.324216] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000007d cdw11:00000000 00:07:19.902 [2024-10-29 22:14:39.324231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.902 #15 NEW cov: 12378 ft: 13873 corp: 10/31b lim: 10 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:19.902 [2024-10-29 22:14:39.373492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a27 cdw11:00000000 00:07:19.902 [2024-10-29 22:14:39.373519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.902 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:19.902 #21 NEW cov: 12401 ft: 13964 corp: 11/33b lim: 10 exec/s: 0 rss: 74Mb L: 2/8 MS: 1 ChangeBinInt- 00:07:20.161 [2024-10-29 22:14:39.453599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000017 cdw11:00000000 00:07:20.161 [2024-10-29 22:14:39.453630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.161 #23 NEW cov: 12401 ft: 13994 corp: 12/36b lim: 10 exec/s: 0 rss: 74Mb L: 3/8 MS: 2 EraseBytes-CMP- DE: "\000\027"- 00:07:20.161 [2024-10-29 22:14:39.504914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:00000000 00:07:20.161 [2024-10-29 22:14:39.504941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.161 [2024-10-29 22:14:39.505031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:20.161 [2024-10-29 22:14:39.505046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.161 [2024-10-29 22:14:39.505116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00005dff cdw11:00000000 00:07:20.161 [2024-10-29 22:14:39.505130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.161 [2024-10-29 22:14:39.505209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.161 [2024-10-29 22:14:39.505224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.161 #24 NEW cov: 12401 ft: 14016 corp: 13/45b lim: 10 exec/s: 24 rss: 74Mb L: 9/9 MS: 1 InsertByte- 00:07:20.161 [2024-10-29 22:14:39.575197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:00000000 00:07:20.161 [2024-10-29 22:14:39.575223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.161 [2024-10-29 22:14:39.575303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:20.161 [2024-10-29 22:14:39.575319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.161 
[2024-10-29 22:14:39.575404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:07:20.161 [2024-10-29 22:14:39.575419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.161 [2024-10-29 22:14:39.575506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000003f cdw11:00000000 00:07:20.161 [2024-10-29 22:14:39.575521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.161 #25 NEW cov: 12401 ft: 14021 corp: 14/53b lim: 10 exec/s: 25 rss: 74Mb L: 8/9 MS: 1 ChangeByte- 00:07:20.161 [2024-10-29 22:14:39.624795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000000fa cdw11:00000000 00:07:20.161 [2024-10-29 22:14:39.624821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.161 [2024-10-29 22:14:39.624910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a7d cdw11:00000000 00:07:20.161 [2024-10-29 22:14:39.624925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.161 #26 NEW cov: 12401 ft: 14109 corp: 15/57b lim: 10 exec/s: 26 rss: 74Mb L: 4/9 MS: 1 CrossOver- 00:07:20.419 [2024-10-29 22:14:39.695402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:00000000 00:07:20.419 [2024-10-29 22:14:39.695429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.419 [2024-10-29 22:14:39.695514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00000000 00:07:20.419 [2024-10-29 22:14:39.695530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.419 [2024-10-29 22:14:39.695623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000009ff cdw11:00000000 00:07:20.419 [2024-10-29 22:14:39.695639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.419 [2024-10-29 22:14:39.695726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.419 [2024-10-29 22:14:39.695743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.419 #27 NEW cov: 12401 ft: 14128 corp: 16/66b lim: 10 exec/s: 27 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:20.420 [2024-10-29 22:14:39.765621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 00:07:20.420 [2024-10-29 22:14:39.765647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.420 [2024-10-29 22:14:39.765735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.420 [2024-10-29 22:14:39.765750] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.420 #28 NEW cov: 12401 ft: 14173 corp: 17/71b lim: 10 exec/s: 28 rss: 74Mb L: 5/9 MS: 1 InsertByte- 00:07:20.420 [2024-10-29 22:14:39.816199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:00000000 00:07:20.420 [2024-10-29 22:14:39.816224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.420 [2024-10-29 22:14:39.816318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:20.420 [2024-10-29 22:14:39.816333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.420 [2024-10-29 22:14:39.816413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff20 cdw11:00000000 00:07:20.420 [2024-10-29 22:14:39.816426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.420 [2024-10-29 22:14:39.816514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000003f cdw11:00000000 00:07:20.420 [2024-10-29 22:14:39.816528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.420 #29 NEW cov: 12401 ft: 14187 corp: 18/79b lim: 10 exec/s: 29 rss: 75Mb L: 8/9 MS: 1 ChangeBit- 00:07:20.420 [2024-10-29 22:14:39.886277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000000fa cdw11:00000000 00:07:20.420 [2024-10-29 22:14:39.886306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.420 [2024-10-29 22:14:39.886409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a7d cdw11:00000000 00:07:20.420 [2024-10-29 22:14:39.886424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.420 #32 NEW cov: 12401 ft: 14224 corp: 19/84b lim: 10 exec/s: 32 rss: 75Mb L: 5/9 MS: 3 CrossOver-ShuffleBytes-CrossOver- 00:07:20.420 [2024-10-29 22:14:39.936492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000017 cdw11:00000000 00:07:20.420 [2024-10-29 22:14:39.936515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.679 #33 NEW cov: 12401 ft: 14265 corp: 20/87b lim: 10 exec/s: 33 rss: 75Mb L: 3/9 MS: 1 PersAutoDict- DE: "\000\027"- 00:07:20.679 [2024-10-29 22:14:39.987572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:00000000 00:07:20.679 [2024-10-29 22:14:39.987597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.679 [2024-10-29 22:14:39.987697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:20.679 [2024-10-29 22:14:39.987714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.679 [2024-10-29 22:14:39.987803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff20 cdw11:00000000 00:07:20.679 [2024-10-29 22:14:39.987817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.679 [2024-10-29 22:14:39.987902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000403f cdw11:00000000 00:07:20.679 [2024-10-29 22:14:39.987917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.679 #34 NEW cov: 12401 ft: 14288 corp: 21/95b lim: 10 exec/s: 34 rss: 75Mb L: 8/9 MS: 1 ChangeBit- 00:07:20.679 [2024-10-29 22:14:40.058741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:20.679 [2024-10-29 22:14:40.058768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.679 [2024-10-29 22:14:40.058868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:20.679 [2024-10-29 22:14:40.058884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.679 [2024-10-29 22:14:40.058966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:20.679 [2024-10-29 22:14:40.058983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.679 [2024-10-29 22:14:40.059070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:20.679 [2024-10-29 22:14:40.059084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.679 [2024-10-29 22:14:40.059178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:20.679 [2024-10-29 22:14:40.059194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:20.679 #35 NEW cov: 12401 ft: 14425 corp: 22/105b lim: 10 exec/s: 35 rss: 75Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:20.679 [2024-10-29 22:14:40.107239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000017 cdw11:00000000 00:07:20.679 [2024-10-29 22:14:40.107264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.679 #36 NEW cov: 12401 ft: 14443 corp: 23/108b lim: 10 exec/s: 36 rss: 75Mb L: 3/10 MS: 1 CopyPart- 00:07:20.679 [2024-10-29 22:14:40.177525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e91 cdw11:00000000 00:07:20.679 [2024-10-29 22:14:40.177550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.679 #37 NEW cov: 12401 ft: 14466 corp: 24/111b lim: 10 exec/s: 37 rss: 75Mb L: 3/10 MS: 1 ChangeBit- 00:07:20.939 [2024-10-29 22:14:40.227730] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000 00:07:20.939 [2024-10-29 22:14:40.227755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.939 #38 NEW cov: 12401 ft: 14478 corp: 25/114b lim: 10 exec/s: 38 rss: 75Mb L: 3/10 MS: 1 InsertByte- 00:07:20.939 [2024-10-29 22:14:40.277850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e81 cdw11:00000000 00:07:20.939 [2024-10-29 22:14:40.277875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.939 #39 NEW cov: 12401 ft: 14485 corp: 26/117b lim: 10 exec/s: 39 rss: 75Mb L: 3/10 MS: 1 InsertByte- 00:07:20.939 [2024-10-29 22:14:40.348313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000000fa cdw11:00000000 00:07:20.939 [2024-10-29 22:14:40.348338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.939 [2024-10-29 22:14:40.348420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff3d cdw11:00000000 00:07:20.939 [2024-10-29 22:14:40.348435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.939 #40 NEW cov: 12401 ft: 14496 corp: 27/121b lim: 10 exec/s: 40 rss: 75Mb L: 4/10 MS: 1 ChangeBit- 00:07:20.939 [2024-10-29 22:14:40.399128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:00000000 00:07:20.939 [2024-10-29 22:14:40.399153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.939 [2024-10-29 22:14:40.399254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff31 cdw11:00000000 00:07:20.939 [2024-10-29 22:14:40.399270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.939 [2024-10-29 22:14:40.399359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00005dff cdw11:00000000 00:07:20.939 [2024-10-29 22:14:40.399373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.939 [2024-10-29 22:14:40.399466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.939 [2024-10-29 22:14:40.399481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.939 #41 NEW cov: 12401 ft: 14521 corp: 28/130b lim: 10 exec/s: 41 rss: 75Mb L: 9/10 MS: 1 ChangeByte- 00:07:20.939 [2024-10-29 22:14:40.449053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000 00:07:20.939 [2024-10-29 22:14:40.449079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.939 [2024-10-29 22:14:40.449170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003030 cdw11:00000000 
00:07:20.939 [2024-10-29 22:14:40.449186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.939 [2024-10-29 22:14:40.449275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000302f cdw11:00000000 00:07:20.939 [2024-10-29 22:14:40.449289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.199 #42 NEW cov: 12401 ft: 14643 corp: 29/136b lim: 10 exec/s: 42 rss: 75Mb L: 6/10 MS: 1 InsertRepeatedBytes- 00:07:21.199 [2024-10-29 22:14:40.518714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e91 cdw11:00000000 00:07:21.199 [2024-10-29 22:14:40.518741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.199 #43 NEW cov: 12401 ft: 14675 corp: 30/139b lim: 10 exec/s: 21 rss: 75Mb L: 3/10 MS: 1 CrossOver- 00:07:21.199 #43 DONE cov: 12401 ft: 14675 corp: 30/139b lim: 10 exec/s: 21 rss: 75Mb 00:07:21.199 ###### Recommended dictionary. ###### 00:07:21.199 "\000\027" # Uses: 1 00:07:21.199 ###### End of recommended dictionary. ###### 00:07:21.199 Done 43 runs in 2 second(s) 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:21.199 22:14:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:21.199 [2024-10-29 22:14:40.714760] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:21.199 [2024-10-29 22:14:40.714828] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3102393 ] 00:07:21.458 [2024-10-29 22:14:40.918341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.458 [2024-10-29 22:14:40.958239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.717 [2024-10-29 22:14:41.017961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.717 [2024-10-29 22:14:41.034129] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:21.717 INFO: Running with entropic power schedule (0xFF, 100). 00:07:21.717 INFO: Seed: 2390805736 00:07:21.717 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:21.717 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:21.717 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:21.717 INFO: A corpus is not provided, starting from an empty corpus 00:07:21.717 #2 INITED exec/s: 0 rss: 66Mb 00:07:21.717 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:21.717 This may also happen if the target rejected all inputs we tried so far 00:07:21.717 [2024-10-29 22:14:41.089605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000080a cdw11:00000000 00:07:21.717 [2024-10-29 22:14:41.089636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.976 NEW_FUNC[1/714]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:21.976 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:21.976 #4 NEW cov: 12175 ft: 12165 corp: 2/3b lim: 10 exec/s: 0 rss: 74Mb L: 2/2 MS: 2 ChangeBit-CrossOver- 00:07:21.976 [2024-10-29 22:14:41.430814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000af8 cdw11:00000000 00:07:21.976 [2024-10-29 22:14:41.430879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.976 [2024-10-29 22:14:41.430963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:07:21.976 [2024-10-29 22:14:41.430990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.976 #6 NEW cov: 12288 ft: 13064 corp: 3/7b lim: 10 exec/s: 0 rss: 74Mb L: 4/4 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:21.976 [2024-10-29 22:14:41.480533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009a0a cdw11:00000000 00:07:21.976 [2024-10-29 22:14:41.480562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.235 #7 NEW cov: 12294 ft: 13375 corp: 4/9b lim: 10 exec/s: 0 rss: 74Mb L: 2/4 MS: 1 InsertByte- 00:07:22.235 [2024-10-29 22:14:41.521005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000af8 cdw11:00000000 00:07:22.235 [2024-10-29 22:14:41.521033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.235 [2024-10-29 22:14:41.521085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:22.235 [2024-10-29 22:14:41.521099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.235 [2024-10-29 22:14:41.521150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:22.235 [2024-10-29 22:14:41.521164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.235 [2024-10-29 22:14:41.521216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000fff8 cdw11:00000000 00:07:22.235 [2024-10-29 22:14:41.521230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.235 #8 NEW cov: 12379 ft: 13932 corp: 5/18b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 InsertRepeatedBytes- 
00:07:22.235 [2024-10-29 22:14:41.580775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009a0a cdw11:00000000 00:07:22.235 [2024-10-29 22:14:41.580803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.235 #9 NEW cov: 12379 ft: 14057 corp: 6/21b lim: 10 exec/s: 0 rss: 74Mb L: 3/9 MS: 1 CrossOver- 00:07:22.235 [2024-10-29 22:14:41.640915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 00:07:22.235 [2024-10-29 22:14:41.640942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.235 #11 NEW cov: 12379 ft: 14190 corp: 7/24b lim: 10 exec/s: 0 rss: 74Mb L: 3/9 MS: 2 ShuffleBytes-CrossOver- 00:07:22.235 [2024-10-29 22:14:41.681433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:22.235 [2024-10-29 22:14:41.681460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.235 [2024-10-29 22:14:41.681513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.235 [2024-10-29 22:14:41.681528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.236 [2024-10-29 22:14:41.681580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.236 [2024-10-29 22:14:41.681595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.236 [2024-10-29 22:14:41.681652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.236 [2024-10-29 22:14:41.681666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.236 #13 NEW cov: 12379 ft: 14239 corp: 8/33b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:22.236 [2024-10-29 22:14:41.741571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:22.236 [2024-10-29 22:14:41.741600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.236 [2024-10-29 22:14:41.741654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 00:07:22.236 [2024-10-29 22:14:41.741670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.236 [2024-10-29 22:14:41.741725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.236 [2024-10-29 22:14:41.741739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.236 [2024-10-29 22:14:41.741790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.236 [2024-10-29 
22:14:41.741804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.495 #14 NEW cov: 12379 ft: 14266 corp: 9/42b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBit- 00:07:22.495 [2024-10-29 22:14:41.801449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009a0e cdw11:00000000 00:07:22.495 [2024-10-29 22:14:41.801478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.495 #15 NEW cov: 12379 ft: 14379 corp: 10/44b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 ChangeBit- 00:07:22.495 [2024-10-29 22:14:41.841565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009a0a cdw11:00000000 00:07:22.495 [2024-10-29 22:14:41.841594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.495 #16 NEW cov: 12379 ft: 14416 corp: 11/47b lim: 10 exec/s: 0 rss: 74Mb L: 3/9 MS: 1 CrossOver- 00:07:22.495 [2024-10-29 22:14:41.901718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b40e cdw11:00000000 00:07:22.495 [2024-10-29 22:14:41.901746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.495 #17 NEW cov: 12379 ft: 14526 corp: 12/49b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 ChangeByte- 00:07:22.495 [2024-10-29 22:14:41.961832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009af8 cdw11:00000000 00:07:22.495 [2024-10-29 22:14:41.961860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.495 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:22.495 #18 NEW cov: 12402 ft: 14559 corp: 13/51b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 EraseBytes- 00:07:22.495 [2024-10-29 22:14:42.002077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 00:07:22.495 [2024-10-29 22:14:42.002105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.495 [2024-10-29 22:14:42.002161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000af2 cdw11:00000000 00:07:22.495 [2024-10-29 22:14:42.002177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.754 #19 NEW cov: 12402 ft: 14665 corp: 14/55b lim: 10 exec/s: 0 rss: 74Mb L: 4/9 MS: 1 InsertByte- 00:07:22.754 [2024-10-29 22:14:42.062121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009a2e cdw11:00000000 00:07:22.754 [2024-10-29 22:14:42.062149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.754 #20 NEW cov: 12402 ft: 14699 corp: 15/57b lim: 10 exec/s: 20 rss: 74Mb L: 2/9 MS: 1 ChangeByte- 00:07:22.754 [2024-10-29 22:14:42.102337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a6d cdw11:00000000 00:07:22.754 [2024-10-29 
22:14:42.102364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.754 [2024-10-29 22:14:42.102419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000af2 cdw11:00000000 00:07:22.754 [2024-10-29 22:14:42.102434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.754 #21 NEW cov: 12402 ft: 14706 corp: 16/61b lim: 10 exec/s: 21 rss: 75Mb L: 4/9 MS: 1 ChangeByte- 00:07:22.754 [2024-10-29 22:14:42.162418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ea5 cdw11:00000000 00:07:22.754 [2024-10-29 22:14:42.162446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.754 #24 NEW cov: 12402 ft: 14745 corp: 17/63b lim: 10 exec/s: 24 rss: 75Mb L: 2/9 MS: 3 ChangeByte-CrossOver-InsertByte- 00:07:22.754 [2024-10-29 22:14:42.202523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b40e cdw11:00000000 00:07:22.754 [2024-10-29 22:14:42.202551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.754 #25 NEW cov: 12402 ft: 14779 corp: 18/65b lim: 10 exec/s: 25 rss: 75Mb L: 2/9 MS: 1 CopyPart- 00:07:22.754 [2024-10-29 22:14:42.262837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009a0a cdw11:00000000 00:07:22.754 [2024-10-29 22:14:42.262864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.755 [2024-10-29 22:14:42.262918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00009af8 cdw11:00000000 00:07:22.755 [2024-10-29 22:14:42.262933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.014 #26 NEW cov: 12402 ft: 14794 corp: 19/69b lim: 10 exec/s: 26 rss: 75Mb L: 4/9 MS: 1 CrossOver- 00:07:23.014 [2024-10-29 22:14:42.302897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000df0e cdw11:00000000 00:07:23.014 [2024-10-29 22:14:42.302927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.014 #27 NEW cov: 12402 ft: 14800 corp: 20/71b lim: 10 exec/s: 27 rss: 75Mb L: 2/9 MS: 1 ChangeByte- 00:07:23.014 [2024-10-29 22:14:42.343061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:23.014 [2024-10-29 22:14:42.343088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.014 [2024-10-29 22:14:42.343142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000fff8 cdw11:00000000 00:07:23.014 [2024-10-29 22:14:42.343157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.014 #28 NEW cov: 12402 ft: 14844 corp: 21/75b lim: 10 exec/s: 28 rss: 75Mb L: 4/9 MS: 1 CrossOver- 00:07:23.014 [2024-10-29 22:14:42.403350] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009a00 cdw11:00000000 00:07:23.014 [2024-10-29 22:14:42.403382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.014 [2024-10-29 22:14:42.403436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.014 [2024-10-29 22:14:42.403450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.014 [2024-10-29 22:14:42.403502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.014 [2024-10-29 22:14:42.403517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.014 #29 NEW cov: 12402 ft: 14989 corp: 22/82b lim: 10 exec/s: 29 rss: 75Mb L: 7/9 MS: 1 InsertRepeatedBytes- 00:07:23.014 [2024-10-29 22:14:42.463392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000de6d cdw11:00000000 00:07:23.014 [2024-10-29 22:14:42.463418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.014 [2024-10-29 22:14:42.463471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000af2 cdw11:00000000 00:07:23.014 [2024-10-29 22:14:42.463486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.014 #30 NEW cov: 12402 ft: 15057 corp: 23/86b lim: 10 exec/s: 30 rss: 75Mb L: 4/9 MS: 1 ChangeByte- 00:07:23.014 [2024-10-29 22:14:42.523939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000df00 cdw11:00000000 00:07:23.014 [2024-10-29 22:14:42.523965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.014 [2024-10-29 22:14:42.524018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.014 [2024-10-29 22:14:42.524033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.014 [2024-10-29 22:14:42.524082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.014 [2024-10-29 22:14:42.524097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.014 [2024-10-29 22:14:42.524150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.014 [2024-10-29 22:14:42.524164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.014 [2024-10-29 22:14:42.524216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 00:07:23.014 [2024-10-29 22:14:42.524231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:23.273 #31 NEW cov: 12402 ft: 15105 corp: 
24/96b lim: 10 exec/s: 31 rss: 75Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:23.273 [2024-10-29 22:14:42.583747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ade cdw11:00000000 00:07:23.273 [2024-10-29 22:14:42.583774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.273 [2024-10-29 22:14:42.583829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006d0a cdw11:00000000 00:07:23.273 [2024-10-29 22:14:42.583844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.273 #32 NEW cov: 12402 ft: 15186 corp: 25/101b lim: 10 exec/s: 32 rss: 75Mb L: 5/10 MS: 1 CopyPart- 00:07:23.273 [2024-10-29 22:14:42.643892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ade cdw11:00000000 00:07:23.273 [2024-10-29 22:14:42.643924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.273 [2024-10-29 22:14:42.643980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a6d cdw11:00000000 00:07:23.273 [2024-10-29 22:14:42.643995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.273 #33 NEW cov: 12402 ft: 15195 corp: 26/106b lim: 10 exec/s: 33 rss: 75Mb L: 5/10 MS: 1 ShuffleBytes- 00:07:23.273 [2024-10-29 22:14:42.704041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000de6d cdw11:00000000 00:07:23.273 [2024-10-29 22:14:42.704067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.273 [2024-10-29 22:14:42.704121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000af2 cdw11:00000000 00:07:23.273 [2024-10-29 22:14:42.704136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.273 #34 NEW cov: 12402 ft: 15243 corp: 27/111b lim: 10 exec/s: 34 rss: 75Mb L: 5/10 MS: 1 InsertByte- 00:07:23.273 [2024-10-29 22:14:42.744586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000df00 cdw11:00000000 00:07:23.273 [2024-10-29 22:14:42.744612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.273 [2024-10-29 22:14:42.744666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.273 [2024-10-29 22:14:42.744680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.273 [2024-10-29 22:14:42.744735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.273 [2024-10-29 22:14:42.744750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.273 [2024-10-29 22:14:42.744802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 
cdw10:00000000 cdw11:00000000 00:07:23.273 [2024-10-29 22:14:42.744816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.273 [2024-10-29 22:14:42.744872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 00:07:23.273 [2024-10-29 22:14:42.744885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:23.273 #35 NEW cov: 12402 ft: 15247 corp: 28/121b lim: 10 exec/s: 35 rss: 75Mb L: 10/10 MS: 1 CopyPart- 00:07:23.532 [2024-10-29 22:14:42.804246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:07:23.532 [2024-10-29 22:14:42.804274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.532 #37 NEW cov: 12402 ft: 15309 corp: 29/123b lim: 10 exec/s: 37 rss: 75Mb L: 2/10 MS: 2 EraseBytes-CopyPart- 00:07:23.532 [2024-10-29 22:14:42.844470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a6d cdw11:00000000 00:07:23.532 [2024-10-29 22:14:42.844497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.532 [2024-10-29 22:14:42.844551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000210a cdw11:00000000 00:07:23.532 [2024-10-29 22:14:42.844565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.532 #38 NEW cov: 12402 ft: 15317 corp: 30/128b lim: 10 exec/s: 38 rss: 75Mb L: 5/10 MS: 1 InsertByte- 00:07:23.532 [2024-10-29 22:14:42.884465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000df0e cdw11:00000000 00:07:23.532 [2024-10-29 22:14:42.884492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.533 #39 NEW cov: 12402 ft: 15325 corp: 31/130b lim: 10 exec/s: 39 rss: 75Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:23.533 [2024-10-29 22:14:42.924611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009a0e cdw11:00000000 00:07:23.533 [2024-10-29 22:14:42.924639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.533 #40 NEW cov: 12402 ft: 15343 corp: 32/132b lim: 10 exec/s: 40 rss: 75Mb L: 2/10 MS: 1 ChangeBit- 00:07:23.533 [2024-10-29 22:14:42.965189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000af8 cdw11:00000000 00:07:23.533 [2024-10-29 22:14:42.965216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.533 [2024-10-29 22:14:42.965271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000fff8 cdw11:00000000 00:07:23.533 [2024-10-29 22:14:42.965285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.533 [2024-10-29 22:14:42.965343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 
nsid:0 cdw10:0000ffff cdw11:00000000 00:07:23.533 [2024-10-29 22:14:42.965358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.533 [2024-10-29 22:14:42.965410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:23.533 [2024-10-29 22:14:42.965423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.533 [2024-10-29 22:14:42.965479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:07:23.533 [2024-10-29 22:14:42.965493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:23.533 #41 NEW cov: 12402 ft: 15351 corp: 33/142b lim: 10 exec/s: 41 rss: 76Mb L: 10/10 MS: 1 CopyPart- 00:07:23.533 [2024-10-29 22:14:43.025371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000df08 cdw11:00000000 00:07:23.533 [2024-10-29 22:14:43.025398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.533 [2024-10-29 22:14:43.025451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.533 [2024-10-29 22:14:43.025465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.533 [2024-10-29 22:14:43.025518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.533 [2024-10-29 22:14:43.025532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.533 [2024-10-29 22:14:43.025584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.533 [2024-10-29 22:14:43.025597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.533 [2024-10-29 22:14:43.025649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 00:07:23.533 [2024-10-29 22:14:43.025663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:23.792 #42 NEW cov: 12402 ft: 15355 corp: 34/152b lim: 10 exec/s: 42 rss: 76Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:23.792 [2024-10-29 22:14:43.085143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 00:07:23.792 [2024-10-29 22:14:43.085170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.792 [2024-10-29 22:14:43.085222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000ef2 cdw11:00000000 00:07:23.792 [2024-10-29 22:14:43.085235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.792 #43 NEW cov: 12402 ft: 15363 corp: 35/156b lim: 10 exec/s: 21 rss: 76Mb L: 4/10 MS: 1 ChangeBinInt- 00:07:23.792 #43 
DONE cov: 12402 ft: 15363 corp: 35/156b lim: 10 exec/s: 21 rss: 76Mb 00:07:23.792 Done 43 runs in 2 second(s) 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:23.792 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:23.793 22:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:23.793 [2024-10-29 22:14:43.257340] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:07:23.793 [2024-10-29 22:14:43.257408] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3102729 ] 00:07:24.051 [2024-10-29 22:14:43.463240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.051 [2024-10-29 22:14:43.502221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.051 [2024-10-29 22:14:43.561692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.310 [2024-10-29 22:14:43.577942] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:24.310 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.310 INFO: Seed: 639837743 00:07:24.310 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:24.310 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:24.310 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:24.311 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.311 [2024-10-29 22:14:43.645159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.311 [2024-10-29 22:14:43.645195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.311 #2 INITED cov: 12203 ft: 12202 corp: 1/1b exec/s: 0 rss: 73Mb 00:07:24.311 [2024-10-29 22:14:43.696936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.311 [2024-10-29 22:14:43.696962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.311 [2024-10-29 22:14:43.697046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.311 [2024-10-29 22:14:43.697061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.311 [2024-10-29 22:14:43.697154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.311 [2024-10-29 22:14:43.697169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.311 [2024-10-29 22:14:43.697261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.311 [2024-10-29 22:14:43.697275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.311 [2024-10-29 22:14:43.697369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.311 [2024-10-29 22:14:43.697384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:07:24.311 #3 NEW cov: 12316 ft: 13570 corp: 2/6b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:24.311 [2024-10-29 22:14:43.765794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.311 [2024-10-29 22:14:43.765820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.311 #4 NEW cov: 12322 ft: 13793 corp: 3/7b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:07:24.311 [2024-10-29 22:14:43.816160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.311 [2024-10-29 22:14:43.816188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.570 #5 NEW cov: 12407 ft: 14067 corp: 4/8b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:07:24.570 [2024-10-29 22:14:43.886712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.570 [2024-10-29 22:14:43.886740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.570 #6 NEW cov: 12407 ft: 14135 corp: 5/9b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:07:24.570 [2024-10-29 22:14:43.937048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.570 [2024-10-29 22:14:43.937077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.570 #7 NEW cov: 12407 ft: 14241 corp: 6/10b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:07:24.570 [2024-10-29 22:14:44.007483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.570 [2024-10-29 22:14:44.007508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.570 #8 NEW cov: 12407 ft: 14298 corp: 7/11b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:07:24.570 [2024-10-29 22:14:44.059282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.570 [2024-10-29 22:14:44.059310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.570 [2024-10-29 22:14:44.059403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.570 [2024-10-29 22:14:44.059426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.570 [2024-10-29 22:14:44.059537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.570 [2024-10-29 
22:14:44.059552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.570 [2024-10-29 22:14:44.059647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.570 [2024-10-29 22:14:44.059662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.570 [2024-10-29 22:14:44.059756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.570 [2024-10-29 22:14:44.059773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.570 #9 NEW cov: 12407 ft: 14327 corp: 8/16b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:24.830 [2024-10-29 22:14:44.108070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.108096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.830 #10 NEW cov: 12407 ft: 14360 corp: 9/17b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:24.830 [2024-10-29 22:14:44.179787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.179814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.830 [2024-10-29 22:14:44.179921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.179936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.830 [2024-10-29 22:14:44.180035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.180051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.830 [2024-10-29 22:14:44.180141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.180160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.830 [2024-10-29 22:14:44.180253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.180268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.830 #11 NEW cov: 12407 ft: 14406 corp: 10/22b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:07:24.830 [2024-10-29 22:14:44.249996] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.250023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.830 [2024-10-29 22:14:44.250131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.250146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.830 [2024-10-29 22:14:44.250236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.250250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.830 [2024-10-29 22:14:44.250350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.250365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.830 [2024-10-29 22:14:44.250454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.250469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.830 #12 NEW cov: 12407 ft: 14543 corp: 11/27b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:07:24.830 [2024-10-29 22:14:44.300043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.300069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.830 [2024-10-29 22:14:44.300184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.300200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.830 [2024-10-29 22:14:44.300293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.300311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.830 [2024-10-29 22:14:44.300413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.300428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.830 [2024-10-29 22:14:44.300513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.830 [2024-10-29 22:14:44.300531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.830 #13 NEW cov: 12407 ft: 14601 corp: 12/32b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:25.089 [2024-10-29 22:14:44.359221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.089 [2024-10-29 22:14:44.359247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.089 [2024-10-29 22:14:44.359343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.089 [2024-10-29 22:14:44.359359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.089 #14 NEW cov: 12407 ft: 14826 corp: 13/34b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:07:25.089 [2024-10-29 22:14:44.429033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.089 [2024-10-29 22:14:44.429058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.089 #15 NEW cov: 12407 ft: 14863 corp: 14/35b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBinInt- 00:07:25.089 [2024-10-29 22:14:44.480918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.089 [2024-10-29 22:14:44.480943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.089 [2024-10-29 22:14:44.481037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.089 [2024-10-29 22:14:44.481051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.089 [2024-10-29 22:14:44.481138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.089 [2024-10-29 22:14:44.481152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.089 [2024-10-29 22:14:44.481237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.089 [2024-10-29 22:14:44.481252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.089 [2024-10-29 22:14:44.481352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.089 [2024-10-29 22:14:44.481367] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.348 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:25.348 #16 NEW cov: 12430 ft: 14891 corp: 15/40b lim: 5 exec/s: 16 rss: 74Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:25.348 [2024-10-29 22:14:44.830940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.348 [2024-10-29 22:14:44.830980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.607 #17 NEW cov: 12430 ft: 14929 corp: 16/41b lim: 5 exec/s: 17 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:07:25.607 [2024-10-29 22:14:44.903044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:44.903078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.607 [2024-10-29 22:14:44.903179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:44.903195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.607 [2024-10-29 22:14:44.903294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:44.903314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.607 [2024-10-29 22:14:44.903421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:44.903437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.607 [2024-10-29 22:14:44.903533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:44.903549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.607 #18 NEW cov: 12430 ft: 14951 corp: 17/46b lim: 5 exec/s: 18 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:07:25.607 [2024-10-29 22:14:44.952030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:44.952059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.607 [2024-10-29 22:14:44.952154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:44.952169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.607 #19 NEW cov: 12430 ft: 14988 corp: 18/48b lim: 5 exec/s: 19 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:07:25.607 [2024-10-29 22:14:45.001946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:45.001973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.607 #20 NEW cov: 12430 ft: 15012 corp: 19/49b lim: 5 exec/s: 20 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:07:25.607 [2024-10-29 22:14:45.053542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:45.053568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.607 [2024-10-29 22:14:45.053658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:45.053673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.607 [2024-10-29 22:14:45.053769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:45.053784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.607 [2024-10-29 22:14:45.053888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:45.053902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.607 [2024-10-29 22:14:45.053995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:45.054010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.607 #21 NEW cov: 12430 ft: 15056 corp: 20/54b lim: 5 exec/s: 21 rss: 75Mb L: 5/5 MS: 1 ChangeByte- 00:07:25.607 [2024-10-29 22:14:45.122957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:45.122981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.607 [2024-10-29 22:14:45.123070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.607 [2024-10-29 22:14:45.123084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.866 #22 NEW cov: 12430 ft: 15081 corp: 21/56b lim: 5 exec/s: 22 rss: 75Mb L: 2/5 MS: 1 CrossOver- 00:07:25.866 [2024-10-29 22:14:45.193627] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.866 [2024-10-29 22:14:45.193653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.866 [2024-10-29 22:14:45.193748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.866 [2024-10-29 22:14:45.193763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.866 #23 NEW cov: 12430 ft: 15099 corp: 22/58b lim: 5 exec/s: 23 rss: 75Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:25.866 [2024-10-29 22:14:45.264974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.866 [2024-10-29 22:14:45.264999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.866 [2024-10-29 22:14:45.265100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.866 [2024-10-29 22:14:45.265114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.866 [2024-10-29 22:14:45.265217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.866 [2024-10-29 22:14:45.265231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.866 [2024-10-29 22:14:45.265331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.866 [2024-10-29 22:14:45.265347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.866 [2024-10-29 22:14:45.265449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.866 [2024-10-29 22:14:45.265466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.866 #24 NEW cov: 12430 ft: 15107 corp: 23/63b lim: 5 exec/s: 24 rss: 75Mb L: 5/5 MS: 1 ChangeBit- 00:07:25.867 [2024-10-29 22:14:45.334789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.867 [2024-10-29 22:14:45.334814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.867 [2024-10-29 22:14:45.334914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.867 [2024-10-29 22:14:45.334930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:07:25.867 [2024-10-29 22:14:45.335031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.867 [2024-10-29 22:14:45.335046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.867 [2024-10-29 22:14:45.335143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.867 [2024-10-29 22:14:45.335158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.867 #25 NEW cov: 12430 ft: 15125 corp: 24/67b lim: 5 exec/s: 25 rss: 75Mb L: 4/5 MS: 1 EraseBytes- 00:07:25.867 [2024-10-29 22:14:45.383980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.867 [2024-10-29 22:14:45.384006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.126 #26 NEW cov: 12430 ft: 15174 corp: 25/68b lim: 5 exec/s: 26 rss: 75Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:26.126 [2024-10-29 22:14:45.434079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.126 [2024-10-29 22:14:45.434105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.126 #27 NEW cov: 12430 ft: 15195 corp: 26/69b lim: 5 exec/s: 27 rss: 75Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:26.126 [2024-10-29 22:14:45.504258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.126 [2024-10-29 22:14:45.504283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.126 #28 NEW cov: 12430 ft: 15196 corp: 27/70b lim: 5 exec/s: 28 rss: 75Mb L: 1/5 MS: 1 ChangeByte- 00:07:26.126 [2024-10-29 22:14:45.556099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.126 [2024-10-29 22:14:45.556126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.126 [2024-10-29 22:14:45.556230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.126 [2024-10-29 22:14:45.556244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.126 [2024-10-29 22:14:45.556353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.126 [2024-10-29 22:14:45.556368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.126 [2024-10-29 22:14:45.556468] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.126 [2024-10-29 22:14:45.556482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.126 [2024-10-29 22:14:45.556571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.126 [2024-10-29 22:14:45.556586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:26.126 #29 NEW cov: 12430 ft: 15203 corp: 28/75b lim: 5 exec/s: 29 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:07:26.126 [2024-10-29 22:14:45.604735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.126 [2024-10-29 22:14:45.604761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.126 #30 NEW cov: 12430 ft: 15218 corp: 29/76b lim: 5 exec/s: 15 rss: 75Mb L: 1/5 MS: 1 EraseBytes- 00:07:26.126 #30 DONE cov: 12430 ft: 15218 corp: 29/76b lim: 5 exec/s: 15 rss: 75Mb 00:07:26.126 Done 30 runs in 2 second(s) 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:26.385 22:14:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:26.385 22:14:45 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:26.385 [2024-10-29 22:14:45.798503] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:26.386 [2024-10-29 22:14:45.798572] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3103080 ] 00:07:26.645 [2024-10-29 22:14:46.005256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.645 [2024-10-29 22:14:46.047358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.645 [2024-10-29 22:14:46.107768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.645 [2024-10-29 22:14:46.123948] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:26.645 INFO: Running with entropic power schedule (0xFF, 100). 00:07:26.645 INFO: Seed: 3186873053 00:07:26.645 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:26.645 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:26.645 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:26.645 INFO: A corpus is not provided, starting from an empty corpus 00:07:26.904 [2024-10-29 22:14:46.201193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.904 [2024-10-29 22:14:46.201227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.904 #2 INITED cov: 12203 ft: 12197 corp: 1/1b exec/s: 0 rss: 73Mb 00:07:26.904 [2024-10-29 22:14:46.251230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.904 [2024-10-29 22:14:46.251259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.904 #3 NEW cov: 12316 ft: 12660 corp: 2/2b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 ChangeBit- 00:07:26.904 [2024-10-29 22:14:46.321657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.904 [2024-10-29 22:14:46.321684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.904 #4 NEW cov: 12322 ft: 12997 corp: 3/3b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 ShuffleBytes- 00:07:26.904 [2024-10-29 22:14:46.372129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.904 [2024-10-29 22:14:46.372153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.904 [2024-10-29 22:14:46.372177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.904 [2024-10-29 22:14:46.372187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.904 #5 NEW cov: 12416 ft: 13865 corp: 4/5b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:07:27.164 [2024-10-29 22:14:46.442396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.164 [2024-10-29 22:14:46.442423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.164 [2024-10-29 22:14:46.442513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.164 [2024-10-29 22:14:46.442528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.164 #6 NEW cov: 12416 ft: 13927 corp: 5/7b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CrossOver- 00:07:27.164 [2024-10-29 22:14:46.513081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.164 [2024-10-29 22:14:46.513107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.164 [2024-10-29 22:14:46.513189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.164 [2024-10-29 22:14:46.513207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.164 [2024-10-29 22:14:46.513301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.164 [2024-10-29 22:14:46.513315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.164 #7 NEW cov: 12416 ft: 14145 corp: 6/10b lim: 5 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 InsertByte- 00:07:27.164 [2024-10-29 22:14:46.582573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.164 [2024-10-29 22:14:46.582600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.164 #8 NEW cov: 12416 ft: 14264 corp: 7/11b lim: 5 exec/s: 0 rss: 73Mb L: 1/3 MS: 1 ShuffleBytes- 00:07:27.164 [2024-10-29 22:14:46.632765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.164 [2024-10-29 22:14:46.632789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.164 #9 NEW cov: 12416 ft: 14343 corp: 8/12b lim: 5 exec/s: 0 rss: 73Mb L: 
1/3 MS: 1 EraseBytes- 00:07:27.164 [2024-10-29 22:14:46.683313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.164 [2024-10-29 22:14:46.683339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.164 [2024-10-29 22:14:46.683428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.164 [2024-10-29 22:14:46.683442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.423 #10 NEW cov: 12416 ft: 14355 corp: 9/14b lim: 5 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 CrossOver- 00:07:27.423 [2024-10-29 22:14:46.733172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.423 [2024-10-29 22:14:46.733196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.423 #11 NEW cov: 12416 ft: 14391 corp: 10/15b lim: 5 exec/s: 0 rss: 73Mb L: 1/3 MS: 1 CrossOver- 00:07:27.423 [2024-10-29 22:14:46.783315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.423 [2024-10-29 22:14:46.783341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.423 #12 NEW cov: 12416 ft: 14477 corp: 11/16b lim: 5 exec/s: 0 rss: 73Mb L: 1/3 MS: 1 EraseBytes- 00:07:27.423 [2024-10-29 22:14:46.834196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.423 [2024-10-29 22:14:46.834221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.423 [2024-10-29 22:14:46.834312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.423 [2024-10-29 22:14:46.834327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.423 [2024-10-29 22:14:46.834417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.423 [2024-10-29 22:14:46.834434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.423 #13 NEW cov: 12416 ft: 14505 corp: 12/19b lim: 5 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 InsertByte- 00:07:27.424 [2024-10-29 22:14:46.883567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.424 [2024-10-29 22:14:46.883590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.424 #14 NEW cov: 12416 ft: 14581 corp: 13/20b lim: 5 exec/s: 0 rss: 73Mb 
L: 1/3 MS: 1 CrossOver- 00:07:27.683 [2024-10-29 22:14:46.954705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.683 [2024-10-29 22:14:46.954730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.683 [2024-10-29 22:14:46.954826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.683 [2024-10-29 22:14:46.954841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.683 [2024-10-29 22:14:46.954929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.683 [2024-10-29 22:14:46.954945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.683 #15 NEW cov: 12416 ft: 14621 corp: 14/23b lim: 5 exec/s: 0 rss: 74Mb L: 3/3 MS: 1 ShuffleBytes- 00:07:27.683 [2024-10-29 22:14:47.024589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.683 [2024-10-29 22:14:47.024614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.683 [2024-10-29 22:14:47.024704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.683 [2024-10-29 22:14:47.024720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.683 #16 NEW cov: 12416 ft: 14630 corp: 15/25b lim: 5 exec/s: 0 rss: 74Mb L: 2/3 MS: 1 CopyPart- 00:07:27.683 [2024-10-29 22:14:47.074858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.683 [2024-10-29 22:14:47.074883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.683 [2024-10-29 22:14:47.074972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.683 [2024-10-29 22:14:47.074987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.942 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:27.943 #17 NEW cov: 12439 ft: 14660 corp: 16/27b lim: 5 exec/s: 17 rss: 75Mb L: 2/3 MS: 1 ChangeBit- 00:07:27.943 [2024-10-29 22:14:47.426089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.943 [2024-10-29 22:14:47.426138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.943 [2024-10-29 22:14:47.426244] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.943 [2024-10-29 22:14:47.426264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.943 [2024-10-29 22:14:47.426365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.943 [2024-10-29 22:14:47.426384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.943 #18 NEW cov: 12439 ft: 14827 corp: 17/30b lim: 5 exec/s: 18 rss: 75Mb L: 3/3 MS: 1 InsertByte- 00:07:28.202 [2024-10-29 22:14:47.485710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.202 [2024-10-29 22:14:47.485742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.202 #19 NEW cov: 12439 ft: 14844 corp: 18/31b lim: 5 exec/s: 19 rss: 75Mb L: 1/3 MS: 1 CrossOver- 00:07:28.202 [2024-10-29 22:14:47.536884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.202 [2024-10-29 22:14:47.536913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.202 [2024-10-29 22:14:47.537002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.202 [2024-10-29 22:14:47.537017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.202 [2024-10-29 22:14:47.537106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.202 [2024-10-29 22:14:47.537120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.202 #20 NEW cov: 12439 ft: 14931 corp: 19/34b lim: 5 exec/s: 20 rss: 75Mb L: 3/3 MS: 1 CrossOver- 00:07:28.202 [2024-10-29 22:14:47.606918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.202 [2024-10-29 22:14:47.606946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.202 [2024-10-29 22:14:47.607037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.202 [2024-10-29 22:14:47.607051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.202 #21 NEW cov: 12439 ft: 14960 corp: 20/36b lim: 5 exec/s: 21 rss: 75Mb L: 2/3 MS: 1 ShuffleBytes- 00:07:28.202 [2024-10-29 22:14:47.657403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 
cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.202 [2024-10-29 22:14:47.657429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.202 [2024-10-29 22:14:47.657527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.202 [2024-10-29 22:14:47.657541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.202 #22 NEW cov: 12439 ft: 14974 corp: 21/38b lim: 5 exec/s: 22 rss: 75Mb L: 2/3 MS: 1 EraseBytes- 00:07:28.461 [2024-10-29 22:14:47.727388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.461 [2024-10-29 22:14:47.727417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.461 #23 NEW cov: 12439 ft: 14985 corp: 22/39b lim: 5 exec/s: 23 rss: 75Mb L: 1/3 MS: 1 EraseBytes- 00:07:28.461 [2024-10-29 22:14:47.777493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.461 [2024-10-29 22:14:47.777519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.461 #24 NEW cov: 12439 ft: 14995 corp: 23/40b lim: 5 exec/s: 24 rss: 75Mb L: 1/3 MS: 1 EraseBytes- 00:07:28.461 [2024-10-29 22:14:47.848065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.461 [2024-10-29 22:14:47.848091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.461 #25 NEW cov: 12439 ft: 15002 corp: 24/41b lim: 5 exec/s: 25 rss: 75Mb L: 1/3 MS: 1 CopyPart- 00:07:28.461 [2024-10-29 22:14:47.918320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.461 [2024-10-29 22:14:47.918347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.461 #26 NEW cov: 12439 ft: 15010 corp: 25/42b lim: 5 exec/s: 26 rss: 75Mb L: 1/3 MS: 1 ChangeBit- 00:07:28.461 [2024-10-29 22:14:47.969111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.461 [2024-10-29 22:14:47.969137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.461 [2024-10-29 22:14:47.969240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.461 [2024-10-29 22:14:47.969255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.720 #27 NEW cov: 12439 ft: 15015 corp: 26/44b lim: 5 exec/s: 27 rss: 75Mb L: 2/3 MS: 1 ShuffleBytes- 00:07:28.720 [2024-10-29 
22:14:48.039494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.720 [2024-10-29 22:14:48.039520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.720 [2024-10-29 22:14:48.039611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.720 [2024-10-29 22:14:48.039627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.720 #28 NEW cov: 12439 ft: 15035 corp: 27/46b lim: 5 exec/s: 28 rss: 75Mb L: 2/3 MS: 1 ChangeBit- 00:07:28.720 [2024-10-29 22:14:48.109627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.720 [2024-10-29 22:14:48.109652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.720 [2024-10-29 22:14:48.109747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.720 [2024-10-29 22:14:48.109762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.720 #29 NEW cov: 12439 ft: 15042 corp: 28/48b lim: 5 exec/s: 29 rss: 75Mb L: 2/3 MS: 1 InsertByte- 00:07:28.720 [2024-10-29 22:14:48.160144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.720 [2024-10-29 22:14:48.160169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.720 [2024-10-29 22:14:48.160257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.720 [2024-10-29 22:14:48.160273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.720 #30 NEW cov: 12439 ft: 15053 corp: 29/50b lim: 5 exec/s: 15 rss: 75Mb L: 2/3 MS: 1 ChangeByte- 00:07:28.720 #30 DONE cov: 12439 ft: 15053 corp: 29/50b lim: 5 exec/s: 15 rss: 75Mb 00:07:28.720 Done 30 runs in 2 second(s) 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz 
-- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:28.980 22:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:28.980 [2024-10-29 22:14:48.340477] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:28.980 [2024-10-29 22:14:48.340543] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3103442 ] 00:07:29.239 [2024-10-29 22:14:48.536635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.239 [2024-10-29 22:14:48.574175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.239 [2024-10-29 22:14:48.633283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.239 [2024-10-29 22:14:48.649459] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:29.239 INFO: Running with entropic power schedule (0xFF, 100). 00:07:29.239 INFO: Seed: 1417859040 00:07:29.239 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:29.239 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:29.239 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:29.239 INFO: A corpus is not provided, starting from an empty corpus 00:07:29.239 #2 INITED exec/s: 0 rss: 68Mb 00:07:29.239 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:29.239 This may also happen if the target rejected all inputs we tried so far 00:07:29.239 [2024-10-29 22:14:48.726992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.239 [2024-10-29 22:14:48.727027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.239 [2024-10-29 22:14:48.727123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.239 [2024-10-29 22:14:48.727138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.757 NEW_FUNC[1/714]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:29.757 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:29.757 #4 NEW cov: 12204 ft: 12205 corp: 2/20b lim: 40 exec/s: 0 rss: 74Mb L: 19/19 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:29.757 [2024-10-29 22:14:49.077942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.757 [2024-10-29 22:14:49.077989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.757 [2024-10-29 22:14:49.078093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.757 [2024-10-29 22:14:49.078112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.757 NEW_FUNC[1/1]: 0x19e4dc8 in nvme_tcp_qpair /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_tcp.c:183 00:07:29.757 #5 NEW cov: 12338 ft: 12824 corp: 3/38b lim: 40 exec/s: 0 rss: 75Mb L: 18/19 MS: 1 EraseBytes- 00:07:29.757 [2024-10-29 22:14:49.158276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.757 [2024-10-29 22:14:49.158307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.757 [2024-10-29 22:14:49.158405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.757 [2024-10-29 22:14:49.158419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.757 #11 NEW cov: 12344 ft: 13068 corp: 4/60b lim: 40 exec/s: 0 rss: 75Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:07:29.757 [2024-10-29 22:14:49.228332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:31ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.757 [2024-10-29 22:14:49.228358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.757 
[2024-10-29 22:14:49.228482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffff00 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.757 [2024-10-29 22:14:49.228497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.757 #12 NEW cov: 12429 ft: 13362 corp: 5/83b lim: 40 exec/s: 0 rss: 75Mb L: 23/23 MS: 1 InsertByte- 00:07:30.016 [2024-10-29 22:14:49.298624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffff8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.016 [2024-10-29 22:14:49.298653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.016 [2024-10-29 22:14:49.298745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.016 [2024-10-29 22:14:49.298760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.016 #13 NEW cov: 12429 ft: 13487 corp: 6/102b lim: 40 exec/s: 0 rss: 75Mb L: 19/23 MS: 1 CrossOver- 00:07:30.016 [2024-10-29 22:14:49.349178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.016 [2024-10-29 22:14:49.349203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.016 [2024-10-29 22:14:49.349294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.016 [2024-10-29 22:14:49.349331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.016 [2024-10-29 22:14:49.349422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:2a2a0a0a cdw11:8affff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.016 [2024-10-29 22:14:49.349436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.016 #22 NEW cov: 12429 ft: 13761 corp: 7/126b lim: 40 exec/s: 0 rss: 75Mb L: 24/24 MS: 4 CopyPart-CopyPart-CrossOver-InsertRepeatedBytes- 00:07:30.016 [2024-10-29 22:14:49.398962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffdf cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.016 [2024-10-29 22:14:49.398988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.016 [2024-10-29 22:14:49.399075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.016 [2024-10-29 22:14:49.399090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.016 #23 NEW cov: 12429 ft: 13819 corp: 8/145b lim: 40 exec/s: 0 rss: 75Mb L: 19/24 MS: 1 ChangeBit- 00:07:30.016 [2024-10-29 22:14:49.449093] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.016 [2024-10-29 22:14:49.449117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.016 [2024-10-29 22:14:49.449217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:0000f7ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.016 [2024-10-29 22:14:49.449232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.016 #24 NEW cov: 12429 ft: 13865 corp: 9/167b lim: 40 exec/s: 0 rss: 75Mb L: 22/24 MS: 1 ChangeBinInt- 00:07:30.016 [2024-10-29 22:14:49.499650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffdf cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.016 [2024-10-29 22:14:49.499674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.016 [2024-10-29 22:14:49.499769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.016 [2024-10-29 22:14:49.499787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.275 #25 NEW cov: 12429 ft: 13962 corp: 10/185b lim: 40 exec/s: 0 rss: 75Mb L: 18/24 MS: 1 EraseBytes- 00:07:30.275 [2024-10-29 22:14:49.570139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.275 [2024-10-29 22:14:49.570164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.275 [2024-10-29 22:14:49.570257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.275 [2024-10-29 22:14:49.570271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.276 [2024-10-29 22:14:49.570374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:2a2a0a0a cdw11:8afffff1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.276 [2024-10-29 22:14:49.570389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.276 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:30.276 #26 NEW cov: 12452 ft: 14016 corp: 11/209b lim: 40 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ChangeBinInt- 00:07:30.276 [2024-10-29 22:14:49.639770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffff8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.276 [2024-10-29 22:14:49.639794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.276 #27 NEW cov: 12452 ft: 14306 corp: 12/221b lim: 40 exec/s: 0 rss: 75Mb L: 12/24 MS: 1 EraseBytes- 00:07:30.276 [2024-10-29 
22:14:49.710280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.276 [2024-10-29 22:14:49.710309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.276 #30 NEW cov: 12452 ft: 14341 corp: 13/230b lim: 40 exec/s: 30 rss: 75Mb L: 9/24 MS: 3 CopyPart-EraseBytes-CMP- DE: "\001\000\000\000\000\000\000\010"- 00:07:30.276 [2024-10-29 22:14:49.760801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffff8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.276 [2024-10-29 22:14:49.760825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.276 [2024-10-29 22:14:49.760913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:31ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.276 [2024-10-29 22:14:49.760928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.276 #31 NEW cov: 12452 ft: 14372 corp: 14/250b lim: 40 exec/s: 31 rss: 75Mb L: 20/24 MS: 1 InsertByte- 00:07:30.537 [2024-10-29 22:14:49.811051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffff8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.537 [2024-10-29 22:14:49.811077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.537 [2024-10-29 22:14:49.811174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.537 [2024-10-29 22:14:49.811189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.537 #32 NEW cov: 12452 ft: 14388 corp: 15/270b lim: 40 exec/s: 32 rss: 75Mb L: 20/24 MS: 1 InsertByte- 00:07:30.537 [2024-10-29 22:14:49.861536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.537 [2024-10-29 22:14:49.861561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.537 [2024-10-29 22:14:49.861659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.537 [2024-10-29 22:14:49.861673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.537 [2024-10-29 22:14:49.861762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:2a2a0a0a cdw11:8affebff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.537 [2024-10-29 22:14:49.861775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.537 #33 NEW cov: 12452 ft: 14416 corp: 16/295b lim: 40 exec/s: 33 rss: 75Mb L: 25/25 MS: 1 InsertByte- 00:07:30.537 [2024-10-29 22:14:49.911767] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2a2a2ad7 cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.537 [2024-10-29 22:14:49.911793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.537 [2024-10-29 22:14:49.911885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.537 [2024-10-29 22:14:49.911899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.537 [2024-10-29 22:14:49.912004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:2a2a0a0a cdw11:8afffff1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.537 [2024-10-29 22:14:49.912018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.537 #34 NEW cov: 12452 ft: 14448 corp: 17/319b lim: 40 exec/s: 34 rss: 75Mb L: 24/25 MS: 1 ChangeBinInt- 00:07:30.537 [2024-10-29 22:14:49.982017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.537 [2024-10-29 22:14:49.982045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.537 [2024-10-29 22:14:49.982139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.537 [2024-10-29 22:14:49.982155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.537 [2024-10-29 22:14:49.982251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:2a2a0a0a cdw11:8affebff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.537 [2024-10-29 22:14:49.982267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.537 #35 NEW cov: 12452 ft: 14515 corp: 18/344b lim: 40 exec/s: 35 rss: 75Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:30.537 [2024-10-29 22:14:50.052062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8aff26df cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.537 [2024-10-29 22:14:50.052093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.537 [2024-10-29 22:14:50.052182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.537 [2024-10-29 22:14:50.052201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.796 #36 NEW cov: 12452 ft: 14558 corp: 19/363b lim: 40 exec/s: 36 rss: 75Mb L: 19/25 MS: 1 ChangeByte- 00:07:30.796 [2024-10-29 22:14:50.102187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:31ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.796 [2024-10-29 22:14:50.102217] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.796 [2024-10-29 22:14:50.102319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff0000ff cdw11:ff0000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.796 [2024-10-29 22:14:50.102335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.796 #37 NEW cov: 12452 ft: 14567 corp: 20/386b lim: 40 exec/s: 37 rss: 75Mb L: 23/25 MS: 1 ShuffleBytes- 00:07:30.796 [2024-10-29 22:14:50.173033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffff8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.796 [2024-10-29 22:14:50.173061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.796 [2024-10-29 22:14:50.173164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:31ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.796 [2024-10-29 22:14:50.173179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.796 [2024-10-29 22:14:50.173269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.796 [2024-10-29 22:14:50.173285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.796 [2024-10-29 22:14:50.173396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.796 [2024-10-29 22:14:50.173413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.796 #38 NEW cov: 12452 ft: 15083 corp: 21/425b lim: 40 exec/s: 38 rss: 75Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:30.796 [2024-10-29 22:14:50.252814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.796 [2024-10-29 22:14:50.252852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.796 [2024-10-29 22:14:50.252998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.796 [2024-10-29 22:14:50.253028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.796 [2024-10-29 22:14:50.253149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:2a2a2a0a cdw11:0a8affeb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.796 [2024-10-29 22:14:50.253170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.796 #39 NEW cov: 12452 ft: 15235 corp: 22/451b lim: 40 exec/s: 39 rss: 75Mb L: 26/39 MS: 1 InsertByte- 00:07:31.055 [2024-10-29 22:14:50.322383] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.055 [2024-10-29 22:14:50.322420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.055 #40 NEW cov: 12452 ft: 15351 corp: 23/466b lim: 40 exec/s: 40 rss: 75Mb L: 15/39 MS: 1 EraseBytes- 00:07:31.055 [2024-10-29 22:14:50.373677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2a2a2a2a cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.055 [2024-10-29 22:14:50.373707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.055 [2024-10-29 22:14:50.373797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2a2affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.055 [2024-10-29 22:14:50.373813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.055 [2024-10-29 22:14:50.373903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff0a0a cdw11:8afffff1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.055 [2024-10-29 22:14:50.373918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.055 #41 NEW cov: 12452 ft: 15392 corp: 24/490b lim: 40 exec/s: 41 rss: 75Mb L: 24/39 MS: 1 CrossOver- 00:07:31.055 [2024-10-29 22:14:50.423443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.055 [2024-10-29 22:14:50.423470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.055 [2024-10-29 22:14:50.423580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.055 [2024-10-29 22:14:50.423596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.055 #42 NEW cov: 12452 ft: 15407 corp: 25/513b lim: 40 exec/s: 42 rss: 75Mb L: 23/39 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\010"- 00:07:31.055 [2024-10-29 22:14:50.493712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1500ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.055 [2024-10-29 22:14:50.493739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.055 [2024-10-29 22:14:50.493834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.055 [2024-10-29 22:14:50.493849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.055 #43 NEW cov: 12452 ft: 15420 corp: 26/535b lim: 40 exec/s: 43 rss: 75Mb L: 22/39 MS: 1 CMP- DE: "\025\000"- 00:07:31.055 [2024-10-29 22:14:50.543862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8a010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.055 [2024-10-29 22:14:50.543886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.055 [2024-10-29 22:14:50.543977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.055 [2024-10-29 22:14:50.543993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.055 #44 NEW cov: 12452 ft: 15432 corp: 27/555b lim: 40 exec/s: 44 rss: 75Mb L: 20/39 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:31.315 [2024-10-29 22:14:50.594126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffdf cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.315 [2024-10-29 22:14:50.594151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.315 [2024-10-29 22:14:50.594246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.315 [2024-10-29 22:14:50.594264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.315 #45 NEW cov: 12452 ft: 15439 corp: 28/572b lim: 40 exec/s: 45 rss: 75Mb L: 17/39 MS: 1 EraseBytes- 00:07:31.315 [2024-10-29 22:14:50.645099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.315 [2024-10-29 22:14:50.645123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.315 [2024-10-29 22:14:50.645227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:8affffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.315 [2024-10-29 22:14:50.645242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.315 [2024-10-29 22:14:50.645338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffdfff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.315 [2024-10-29 22:14:50.645353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.315 [2024-10-29 22:14:50.645443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.315 [2024-10-29 22:14:50.645457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.315 #46 NEW cov: 12452 ft: 15443 corp: 29/608b lim: 40 exec/s: 46 rss: 75Mb L: 36/39 MS: 1 CrossOver- 00:07:31.315 [2024-10-29 22:14:50.694842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.315 [2024-10-29 22:14:50.694868] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.315 [2024-10-29 22:14:50.695003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.315 [2024-10-29 22:14:50.695019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.315 #47 NEW cov: 12452 ft: 15461 corp: 30/628b lim: 40 exec/s: 23 rss: 76Mb L: 20/39 MS: 1 InsertRepeatedBytes- 00:07:31.315 #47 DONE cov: 12452 ft: 15461 corp: 30/628b lim: 40 exec/s: 23 rss: 76Mb 00:07:31.315 ###### Recommended dictionary. ###### 00:07:31.315 "\001\000\000\000\000\000\000\010" # Uses: 1 00:07:31.315 "\025\000" # Uses: 0 00:07:31.315 "\001\000\000\000\000\000\000\000" # Uses: 0 00:07:31.315 ###### End of recommended dictionary. ###### 00:07:31.315 Done 47 runs in 2 second(s) 00:07:31.574 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:31.574 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:31.574 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:31.574 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:31.574 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:31.574 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:31.574 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:31.574 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:31.574 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:31.574 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:31.574 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:31.575 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:31.575 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:31.575 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:31.575 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:31.575 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:31.575 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:31.575 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:31.575 22:14:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 
00:07:31.575 [2024-10-29 22:14:50.892992] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:31.575 [2024-10-29 22:14:50.893059] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3103799 ] 00:07:31.834 [2024-10-29 22:14:51.099108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.834 [2024-10-29 22:14:51.139258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.834 [2024-10-29 22:14:51.199020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.834 [2024-10-29 22:14:51.215180] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:31.834 INFO: Running with entropic power schedule (0xFF, 100). 00:07:31.834 INFO: Seed: 3980865335 00:07:31.834 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:31.834 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:31.834 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:31.834 INFO: A corpus is not provided, starting from an empty corpus 00:07:31.834 #2 INITED exec/s: 0 rss: 66Mb 00:07:31.834 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:31.834 This may also happen if the target rejected all inputs we tried so far 00:07:31.834 [2024-10-29 22:14:51.273820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.834 [2024-10-29 22:14:51.273849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.094 NEW_FUNC[1/716]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:32.094 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:32.094 #3 NEW cov: 12238 ft: 12226 corp: 2/9b lim: 40 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:32.094 [2024-10-29 22:14:51.615386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.094 [2024-10-29 22:14:51.615448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.094 [2024-10-29 22:14:51.615534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.094 [2024-10-29 22:14:51.615561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.094 [2024-10-29 22:14:51.615647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.094 [2024-10-29 22:14:51.615672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.354 #7 NEW cov: 12351 ft: 13642 
corp: 3/37b lim: 40 exec/s: 0 rss: 74Mb L: 28/28 MS: 4 EraseBytes-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:07:32.354 [2024-10-29 22:14:51.685239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a008000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.354 [2024-10-29 22:14:51.685269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.354 [2024-10-29 22:14:51.685332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.354 [2024-10-29 22:14:51.685347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.354 [2024-10-29 22:14:51.685401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.354 [2024-10-29 22:14:51.685415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.354 #8 NEW cov: 12357 ft: 13835 corp: 4/65b lim: 40 exec/s: 0 rss: 74Mb L: 28/28 MS: 1 ChangeBit- 00:07:32.354 [2024-10-29 22:14:51.745366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.354 [2024-10-29 22:14:51.745394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.354 [2024-10-29 22:14:51.745451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.354 [2024-10-29 22:14:51.745465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.354 [2024-10-29 22:14:51.745519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.354 [2024-10-29 22:14:51.745532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.354 #9 NEW cov: 12442 ft: 14092 corp: 5/94b lim: 40 exec/s: 0 rss: 74Mb L: 29/29 MS: 1 InsertByte- 00:07:32.354 [2024-10-29 22:14:51.785315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.354 [2024-10-29 22:14:51.785340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.354 [2024-10-29 22:14:51.785396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.354 [2024-10-29 22:14:51.785410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.354 #10 NEW cov: 12442 ft: 14363 corp: 6/112b lim: 40 exec/s: 0 rss: 74Mb L: 18/29 MS: 1 EraseBytes- 00:07:32.354 [2024-10-29 22:14:51.845608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 
cdw10:0a008000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.354 [2024-10-29 22:14:51.845634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.354 [2024-10-29 22:14:51.845689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.354 [2024-10-29 22:14:51.845707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.354 [2024-10-29 22:14:51.845763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.354 [2024-10-29 22:14:51.845777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.613 #11 NEW cov: 12442 ft: 14515 corp: 7/140b lim: 40 exec/s: 0 rss: 74Mb L: 28/29 MS: 1 ShuffleBytes- 00:07:32.613 [2024-10-29 22:14:51.906113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.613 [2024-10-29 22:14:51.906138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.613 [2024-10-29 22:14:51.906196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:aeaeaeae cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.613 [2024-10-29 22:14:51.906211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.613 [2024-10-29 22:14:51.906268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:aeaeaeae cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.613 [2024-10-29 22:14:51.906283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.613 [2024-10-29 22:14:51.906342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:aeaeaeae cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.613 [2024-10-29 22:14:51.906356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.613 [2024-10-29 22:14:51.906411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:aeaeaeae cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.613 [2024-10-29 22:14:51.906425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.613 #14 NEW cov: 12442 ft: 14955 corp: 8/180b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 3 CrossOver-CrossOver-InsertRepeatedBytes- 00:07:32.613 [2024-10-29 22:14:51.945887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a008000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.613 [2024-10-29 22:14:51.945913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.614 [2024-10-29 22:14:51.945968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:51.945982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.614 [2024-10-29 22:14:51.946038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:51.946052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.614 #15 NEW cov: 12442 ft: 15014 corp: 9/208b lim: 40 exec/s: 0 rss: 74Mb L: 28/40 MS: 1 CopyPart- 00:07:32.614 [2024-10-29 22:14:51.986153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a008000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:51.986178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.614 [2024-10-29 22:14:51.986236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:51.986254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.614 [2024-10-29 22:14:51.986312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0ffffff cdw11:ffffffb0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:51.986327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.614 [2024-10-29 22:14:51.986382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:51.986396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.614 #16 NEW cov: 12442 ft: 15069 corp: 10/242b lim: 40 exec/s: 0 rss: 74Mb L: 34/40 MS: 1 InsertRepeatedBytes- 00:07:32.614 [2024-10-29 22:14:52.026432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:52.026458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.614 [2024-10-29 22:14:52.026516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:aeaeb0b0 cdw11:b0b0aeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:52.026530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.614 [2024-10-29 22:14:52.026585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:aeaeaeae cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:52.026599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.614 [2024-10-29 22:14:52.026653] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:aeaeaeae cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:52.026666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.614 [2024-10-29 22:14:52.026723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:aeaeaeae cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:52.026738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.614 #17 NEW cov: 12442 ft: 15110 corp: 11/282b lim: 40 exec/s: 0 rss: 75Mb L: 40/40 MS: 1 CrossOver- 00:07:32.614 [2024-10-29 22:14:52.085970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:52.085996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.614 #18 NEW cov: 12442 ft: 15239 corp: 12/290b lim: 40 exec/s: 0 rss: 75Mb L: 8/40 MS: 1 ChangeBit- 00:07:32.614 [2024-10-29 22:14:52.126542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a008000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:52.126568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.614 [2024-10-29 22:14:52.126624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:94000000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:52.126638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.614 [2024-10-29 22:14:52.126695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:52.126713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.614 [2024-10-29 22:14:52.126766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:b0b0b0b0 cdw11:b0b00000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.614 [2024-10-29 22:14:52.126780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.873 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:32.873 #19 NEW cov: 12465 ft: 15328 corp: 13/322b lim: 40 exec/s: 0 rss: 75Mb L: 32/40 MS: 1 CMP- DE: "\224\000\000\000"- 00:07:32.873 [2024-10-29 22:14:52.186586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.873 [2024-10-29 22:14:52.186612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.873 [2024-10-29 22:14:52.186670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 
cdw11:b0b0b0f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.873 [2024-10-29 22:14:52.186684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.873 [2024-10-29 22:14:52.186739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.873 [2024-10-29 22:14:52.186753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.873 #20 NEW cov: 12465 ft: 15339 corp: 14/347b lim: 40 exec/s: 0 rss: 75Mb L: 25/40 MS: 1 InsertRepeatedBytes- 00:07:32.873 [2024-10-29 22:14:52.246770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a008000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.873 [2024-10-29 22:14:52.246798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.873 [2024-10-29 22:14:52.246856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.873 [2024-10-29 22:14:52.246871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.873 [2024-10-29 22:14:52.246930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0b0b094 cdw11:000000b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.873 [2024-10-29 22:14:52.246945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.873 #21 NEW cov: 12465 ft: 15360 corp: 15/375b lim: 40 exec/s: 21 rss: 75Mb L: 28/40 MS: 1 PersAutoDict- DE: "\224\000\000\000"- 00:07:32.873 [2024-10-29 22:14:52.306747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a7e0000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.873 [2024-10-29 22:14:52.306772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.873 [2024-10-29 22:14:52.306832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.873 [2024-10-29 22:14:52.306846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.873 #22 NEW cov: 12465 ft: 15393 corp: 16/393b lim: 40 exec/s: 22 rss: 75Mb L: 18/40 MS: 1 ChangeByte- 00:07:32.873 [2024-10-29 22:14:52.347149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.873 [2024-10-29 22:14:52.347178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.873 [2024-10-29 22:14:52.347237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.873 [2024-10-29 22:14:52.347251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:32.873 [2024-10-29 22:14:52.347309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.873 [2024-10-29 22:14:52.347323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.873 [2024-10-29 22:14:52.347376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:b0b0b0b0 cdw11:b0f30000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.873 [2024-10-29 22:14:52.347389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.873 #23 NEW cov: 12465 ft: 15401 corp: 17/425b lim: 40 exec/s: 23 rss: 75Mb L: 32/40 MS: 1 CopyPart- 00:07:32.873 [2024-10-29 22:14:52.386972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a7e000a cdw11:008000b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.873 [2024-10-29 22:14:52.386998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.873 [2024-10-29 22:14:52.387055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.873 [2024-10-29 22:14:52.387069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.132 #24 NEW cov: 12465 ft: 15420 corp: 18/443b lim: 40 exec/s: 24 rss: 75Mb L: 18/40 MS: 1 CrossOver- 00:07:33.132 [2024-10-29 22:14:52.446983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a00f8ff cdw11:fffe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.132 [2024-10-29 22:14:52.447008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.132 #25 NEW cov: 12465 ft: 15471 corp: 19/451b lim: 40 exec/s: 25 rss: 75Mb L: 8/40 MS: 1 ChangeBinInt- 00:07:33.132 [2024-10-29 22:14:52.507143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:94000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.132 [2024-10-29 22:14:52.507169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.132 #26 NEW cov: 12465 ft: 15534 corp: 20/459b lim: 40 exec/s: 26 rss: 75Mb L: 8/40 MS: 1 PersAutoDict- DE: "\224\000\000\000"- 00:07:33.132 [2024-10-29 22:14:52.547898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.132 [2024-10-29 22:14:52.547925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.132 [2024-10-29 22:14:52.547982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:aeaeb0b0 cdw11:b0b0aeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.132 [2024-10-29 22:14:52.547997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.132 [2024-10-29 22:14:52.548055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND 
(81) qid:0 cid:6 nsid:0 cdw10:aeaeaeae cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.132 [2024-10-29 22:14:52.548069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.132 [2024-10-29 22:14:52.548125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:aeaeaeae cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.132 [2024-10-29 22:14:52.548142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.133 [2024-10-29 22:14:52.548199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:aeaeaeae cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.133 [2024-10-29 22:14:52.548213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.133 #27 NEW cov: 12465 ft: 15567 corp: 21/499b lim: 40 exec/s: 27 rss: 75Mb L: 40/40 MS: 1 CopyPart- 00:07:33.133 [2024-10-29 22:14:52.607584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.133 [2024-10-29 22:14:52.607609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.133 [2024-10-29 22:14:52.607664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b041b0f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.133 [2024-10-29 22:14:52.607678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.133 #28 NEW cov: 12465 ft: 15579 corp: 22/517b lim: 40 exec/s: 28 rss: 75Mb L: 18/40 MS: 1 ChangeByte- 00:07:33.133 [2024-10-29 22:14:52.648166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.133 [2024-10-29 22:14:52.648192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.133 [2024-10-29 22:14:52.648249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:aeaeaeae cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.133 [2024-10-29 22:14:52.648264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.133 [2024-10-29 22:14:52.648326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:aeaeaeae cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.133 [2024-10-29 22:14:52.648340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.133 [2024-10-29 22:14:52.648397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:aeaeaeae cdw11:aeaeaeae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.133 [2024-10-29 22:14:52.648410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.133 [2024-10-29 22:14:52.648466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:aeaeaeae cdw11:aeae2bae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.133 [2024-10-29 22:14:52.648480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.392 #29 NEW cov: 12465 ft: 15641 corp: 23/557b lim: 40 exec/s: 29 rss: 75Mb L: 40/40 MS: 1 ChangeByte- 00:07:33.392 [2024-10-29 22:14:52.687977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a008000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.688004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.392 [2024-10-29 22:14:52.688065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.688080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.392 [2024-10-29 22:14:52.688137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0b0b094 cdw11:00000909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.688155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.392 #30 NEW cov: 12465 ft: 15668 corp: 24/588b lim: 40 exec/s: 30 rss: 75Mb L: 31/40 MS: 1 InsertRepeatedBytes- 00:07:33.392 [2024-10-29 22:14:52.748313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ab0b0b0 cdw11:008000b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.748357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.392 [2024-10-29 22:14:52.748419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b094 cdw11:000000b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.748435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.392 [2024-10-29 22:14:52.748493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.748509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.392 [2024-10-29 22:14:52.748566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.748581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.392 #31 NEW cov: 12465 ft: 15705 corp: 25/623b lim: 40 exec/s: 31 rss: 75Mb L: 35/40 MS: 1 CopyPart- 00:07:33.392 [2024-10-29 22:14:52.808454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a008000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.808481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.392 [2024-10-29 22:14:52.808539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.808553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.392 [2024-10-29 22:14:52.808611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0ffffff cdw11:fffffdb0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.808625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.392 [2024-10-29 22:14:52.808683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.808696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.392 #32 NEW cov: 12465 ft: 15719 corp: 26/657b lim: 40 exec/s: 32 rss: 75Mb L: 34/40 MS: 1 ChangeBit- 00:07:33.392 [2024-10-29 22:14:52.868319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a7e0000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.868344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.392 [2024-10-29 22:14:52.868402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0940000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.868416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.392 #33 NEW cov: 12465 ft: 15808 corp: 27/675b lim: 40 exec/s: 33 rss: 75Mb L: 18/40 MS: 1 PersAutoDict- DE: "\224\000\000\000"- 00:07:33.392 [2024-10-29 22:14:52.908591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a008000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.908617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.392 [2024-10-29 22:14:52.908675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.908690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.392 [2024-10-29 22:14:52.908746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0b09400 cdw11:0000b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.392 [2024-10-29 22:14:52.908760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.652 #34 NEW cov: 12465 ft: 15854 corp: 28/703b lim: 40 exec/s: 34 rss: 75Mb L: 28/40 MS: 1 PersAutoDict- DE: "\224\000\000\000"- 00:07:33.652 [2024-10-29 22:14:52.948397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a00f800 
cdw11:f8fffffe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.652 [2024-10-29 22:14:52.948423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.652 #35 NEW cov: 12465 ft: 15868 corp: 29/711b lim: 40 exec/s: 35 rss: 76Mb L: 8/40 MS: 1 CopyPart- 00:07:33.652 [2024-10-29 22:14:53.009183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.652 [2024-10-29 22:14:53.009209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.652 [2024-10-29 22:14:53.009268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b09400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.652 [2024-10-29 22:14:53.009282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.652 [2024-10-29 22:14:53.009339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0000b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.652 [2024-10-29 22:14:53.009354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.652 [2024-10-29 22:14:53.009410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.652 [2024-10-29 22:14:53.009425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.652 [2024-10-29 22:14:53.009481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:b0b0b0b0 cdw11:b0f30000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.652 [2024-10-29 22:14:53.009495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.652 #36 NEW cov: 12465 ft: 15902 corp: 30/751b lim: 40 exec/s: 36 rss: 76Mb L: 40/40 MS: 1 CrossOver- 00:07:33.652 [2024-10-29 22:14:53.068884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a7e0700 cdw11:00b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.652 [2024-10-29 22:14:53.068911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.652 [2024-10-29 22:14:53.068970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b09400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.652 [2024-10-29 22:14:53.068985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.652 #37 NEW cov: 12465 ft: 15911 corp: 31/770b lim: 40 exec/s: 37 rss: 76Mb L: 19/40 MS: 1 InsertByte- 00:07:33.652 [2024-10-29 22:14:53.129247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.652 [2024-10-29 22:14:53.129274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.652 [2024-10-29 
22:14:53.129333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.652 [2024-10-29 22:14:53.129348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.652 [2024-10-29 22:14:53.129401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.652 [2024-10-29 22:14:53.129415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.652 #38 NEW cov: 12465 ft: 15945 corp: 32/798b lim: 40 exec/s: 38 rss: 76Mb L: 28/40 MS: 1 ShuffleBytes- 00:07:33.652 [2024-10-29 22:14:53.169207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a7e0000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.652 [2024-10-29 22:14:53.169233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.652 [2024-10-29 22:14:53.169291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0b0b0b0 cdw11:b090b0f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.652 [2024-10-29 22:14:53.169311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.912 #39 NEW cov: 12465 ft: 15951 corp: 33/816b lim: 40 exec/s: 39 rss: 76Mb L: 18/40 MS: 1 ChangeBit- 00:07:33.912 [2024-10-29 22:14:53.209751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a008000 cdw11:b0b0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.912 [2024-10-29 22:14:53.209777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.912 [2024-10-29 22:14:53.209837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b0ffffff cdw11:ffffffb0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.912 [2024-10-29 22:14:53.209851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.912 [2024-10-29 22:14:53.209906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b0b0b0b0 cdw11:b0b0b0ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.912 [2024-10-29 22:14:53.209920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.912 [2024-10-29 22:14:53.209975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fdb0b0b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.912 [2024-10-29 22:14:53.209989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.912 [2024-10-29 22:14:53.210046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:b0b0b0b0 cdw11:b0b00000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.912 [2024-10-29 22:14:53.210060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.912 
#40 NEW cov: 12465 ft: 15955 corp: 34/856b lim: 40 exec/s: 40 rss: 76Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:33.912 [2024-10-29 22:14:53.269327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:b0b0f300 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.912 [2024-10-29 22:14:53.269356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.912 #41 NEW cov: 12465 ft: 15957 corp: 35/865b lim: 40 exec/s: 20 rss: 76Mb L: 9/40 MS: 1 EraseBytes- 00:07:33.912 #41 DONE cov: 12465 ft: 15957 corp: 35/865b lim: 40 exec/s: 20 rss: 76Mb 00:07:33.912 ###### Recommended dictionary. ###### 00:07:33.912 "\224\000\000\000" # Uses: 4 00:07:33.912 ###### End of recommended dictionary. ###### 00:07:33.913 Done 41 runs in 2 second(s) 00:07:33.913 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:07:33.913 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:33.913 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:33.913 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:33.913 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:33.913 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:33.913 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:33.913 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:33.913 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:33.913 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:33.913 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:33.913 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:07:33.913 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:07:33.913 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:34.172 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:34.172 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:34.172 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:34.172 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:34.172 22:14:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:07:34.172 [2024-10-29 22:14:53.470054] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:07:34.172 [2024-10-29 22:14:53.470132] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3104159 ] 00:07:34.172 [2024-10-29 22:14:53.670046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.431 [2024-10-29 22:14:53.709340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.431 [2024-10-29 22:14:53.768531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.431 [2024-10-29 22:14:53.784695] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:34.431 INFO: Running with entropic power schedule (0xFF, 100). 00:07:34.431 INFO: Seed: 2257888050 00:07:34.431 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:34.431 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:34.431 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:34.431 INFO: A corpus is not provided, starting from an empty corpus 00:07:34.431 #2 INITED exec/s: 0 rss: 66Mb 00:07:34.431 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:34.431 This may also happen if the target rejected all inputs we tried so far 00:07:34.431 [2024-10-29 22:14:53.850685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.431 [2024-10-29 22:14:53.850715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.431 [2024-10-29 22:14:53.850772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.431 [2024-10-29 22:14:53.850787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.431 [2024-10-29 22:14:53.850839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.431 [2024-10-29 22:14:53.850854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.431 [2024-10-29 22:14:53.850907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.431 [2024-10-29 22:14:53.850921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.690 NEW_FUNC[1/714]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:34.690 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:34.690 #3 NEW cov: 12191 ft: 12217 corp: 2/34b lim: 40 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:34.690 [2024-10-29 22:14:54.191497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 
cdw10:0ad50ad5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.690 [2024-10-29 22:14:54.191558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.950 NEW_FUNC[1/2]: 0x1f75498 in spdk_thread_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1180 00:07:34.950 NEW_FUNC[2/2]: 0x1f75c78 in thread_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1080 00:07:34.950 #14 NEW cov: 12348 ft: 13572 corp: 3/49b lim: 40 exec/s: 0 rss: 74Mb L: 15/33 MS: 1 CrossOver- 00:07:34.950 [2024-10-29 22:14:54.261650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.950 [2024-10-29 22:14:54.261681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.950 [2024-10-29 22:14:54.261736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5ffffff cdw11:ffffd5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.950 [2024-10-29 22:14:54.261751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.950 [2024-10-29 22:14:54.261804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.950 [2024-10-29 22:14:54.261818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.950 [2024-10-29 22:14:54.261872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.950 [2024-10-29 22:14:54.261886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.950 #15 NEW cov: 12354 ft: 13939 corp: 4/87b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:34.950 [2024-10-29 22:14:54.301260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad50ad5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.950 [2024-10-29 22:14:54.301286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.950 #16 NEW cov: 12439 ft: 14193 corp: 5/102b lim: 40 exec/s: 0 rss: 74Mb L: 15/38 MS: 1 ShuffleBytes- 00:07:34.950 [2024-10-29 22:14:54.361391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad50ad5 cdw11:d5d55dd5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.950 [2024-10-29 22:14:54.361417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.950 #17 NEW cov: 12439 ft: 14344 corp: 6/117b lim: 40 exec/s: 0 rss: 74Mb L: 15/38 MS: 1 ChangeByte- 00:07:34.950 [2024-10-29 22:14:54.401960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.950 [2024-10-29 22:14:54.401986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:34.950 [2024-10-29 22:14:54.402043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5ffffff cdw11:ffd5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.950 [2024-10-29 22:14:54.402058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.950 [2024-10-29 22:14:54.402113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.950 [2024-10-29 22:14:54.402127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.950 [2024-10-29 22:14:54.402182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.950 [2024-10-29 22:14:54.402197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.950 #18 NEW cov: 12439 ft: 14445 corp: 7/150b lim: 40 exec/s: 0 rss: 74Mb L: 33/38 MS: 1 EraseBytes- 00:07:34.950 [2024-10-29 22:14:54.461685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad50ad5 cdw11:feffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.951 [2024-10-29 22:14:54.461711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.210 #19 NEW cov: 12439 ft: 14551 corp: 8/165b lim: 40 exec/s: 0 rss: 74Mb L: 15/38 MS: 1 CMP- DE: "\376\377\377\377\000\000\000\000"- 00:07:35.210 [2024-10-29 22:14:54.522373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad50ad5 cdw11:d5000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.210 [2024-10-29 22:14:54.522399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.210 [2024-10-29 22:14:54.522456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.210 [2024-10-29 22:14:54.522470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.210 [2024-10-29 22:14:54.522526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.210 [2024-10-29 22:14:54.522540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.210 [2024-10-29 22:14:54.522591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d55dd5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.210 [2024-10-29 22:14:54.522608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.210 #20 NEW cov: 12439 ft: 14592 corp: 9/199b lim: 40 exec/s: 0 rss: 74Mb L: 34/38 MS: 1 InsertRepeatedBytes- 00:07:35.210 [2024-10-29 22:14:54.582477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad50ad5 cdw11:d5000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:35.210 [2024-10-29 22:14:54.582503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.210 [2024-10-29 22:14:54.582558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.210 [2024-10-29 22:14:54.582572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.210 [2024-10-29 22:14:54.582627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:40000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.210 [2024-10-29 22:14:54.582641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.210 [2024-10-29 22:14:54.582694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d55dd5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.210 [2024-10-29 22:14:54.582708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.210 #21 NEW cov: 12439 ft: 14653 corp: 10/233b lim: 40 exec/s: 0 rss: 74Mb L: 34/38 MS: 1 ChangeBit- 00:07:35.210 [2024-10-29 22:14:54.642654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad50ad5 cdw11:d5000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.210 [2024-10-29 22:14:54.642681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.210 [2024-10-29 22:14:54.642737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.210 [2024-10-29 22:14:54.642751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.210 [2024-10-29 22:14:54.642807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:40000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.210 [2024-10-29 22:14:54.642821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.210 [2024-10-29 22:14:54.642873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d55dd5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.210 [2024-10-29 22:14:54.642889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.210 #22 NEW cov: 12439 ft: 14685 corp: 11/267b lim: 40 exec/s: 0 rss: 75Mb L: 34/38 MS: 1 ShuffleBytes- 00:07:35.210 [2024-10-29 22:14:54.702374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad50ad5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.210 [2024-10-29 22:14:54.702400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.210 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:35.210 #23 NEW cov: 12462 ft: 14742 corp: 
12/282b lim: 40 exec/s: 0 rss: 75Mb L: 15/38 MS: 1 ChangeBit- 00:07:35.470 [2024-10-29 22:14:54.742941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.742968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.470 [2024-10-29 22:14:54.743029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.743044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.470 [2024-10-29 22:14:54.743100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.743115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.470 [2024-10-29 22:14:54.743172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.743185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.470 #24 NEW cov: 12462 ft: 14782 corp: 13/315b lim: 40 exec/s: 0 rss: 75Mb L: 33/38 MS: 1 CopyPart- 00:07:35.470 [2024-10-29 22:14:54.783051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad50ad5 cdw11:d5000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.783077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.470 [2024-10-29 22:14:54.783134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.783148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.470 [2024-10-29 22:14:54.783203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:40000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.783218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.470 [2024-10-29 22:14:54.783271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d55dd5d5 cdw11:d5d5d5a9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.783285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.470 #25 NEW cov: 12462 ft: 14789 corp: 14/350b lim: 40 exec/s: 0 rss: 75Mb L: 35/38 MS: 1 InsertByte- 00:07:35.470 [2024-10-29 22:14:54.842713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad50ad5 cdw11:dbd55dd5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.842739] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.470 #26 NEW cov: 12462 ft: 14837 corp: 15/365b lim: 40 exec/s: 26 rss: 75Mb L: 15/38 MS: 1 ChangeByte- 00:07:35.470 [2024-10-29 22:14:54.882865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad50ad5 cdw11:d5020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.882891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.470 #27 NEW cov: 12462 ft: 14882 corp: 16/380b lim: 40 exec/s: 27 rss: 75Mb L: 15/38 MS: 1 CMP- DE: "\002\000\000\000\000\000\000\000"- 00:07:35.470 [2024-10-29 22:14:54.922985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a2a0ad5 cdw11:d5020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.923010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.470 #28 NEW cov: 12462 ft: 14893 corp: 17/395b lim: 40 exec/s: 28 rss: 75Mb L: 15/38 MS: 1 ChangeByte- 00:07:35.470 [2024-10-29 22:14:54.983632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.983658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.470 [2024-10-29 22:14:54.983715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5ffffff cdw11:ffffd5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.983729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.470 [2024-10-29 22:14:54.983782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.983796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.470 [2024-10-29 22:14:54.983851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.470 [2024-10-29 22:14:54.983866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.729 #29 NEW cov: 12462 ft: 14919 corp: 18/430b lim: 40 exec/s: 29 rss: 75Mb L: 35/38 MS: 1 EraseBytes- 00:07:35.729 [2024-10-29 22:14:55.023739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:feffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.729 [2024-10-29 22:14:55.023765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.729 [2024-10-29 22:14:55.023824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5ffffff cdw11:ffd5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.729 [2024-10-29 22:14:55.023838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.729 
[2024-10-29 22:14:55.023896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.729 [2024-10-29 22:14:55.023910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.729 [2024-10-29 22:14:55.023963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.729 [2024-10-29 22:14:55.023977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.729 #30 NEW cov: 12462 ft: 15005 corp: 19/463b lim: 40 exec/s: 30 rss: 75Mb L: 33/38 MS: 1 PersAutoDict- DE: "\376\377\377\377\000\000\000\000"- 00:07:35.729 [2024-10-29 22:14:55.083409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.729 [2024-10-29 22:14:55.083438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.729 #31 NEW cov: 12462 ft: 15041 corp: 20/473b lim: 40 exec/s: 31 rss: 75Mb L: 10/38 MS: 1 InsertRepeatedBytes- 00:07:35.729 [2024-10-29 22:14:55.124013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.729 [2024-10-29 22:14:55.124039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.730 [2024-10-29 22:14:55.124096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d52a2a2a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.730 [2024-10-29 22:14:55.124110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.730 [2024-10-29 22:14:55.124170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffd5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.730 [2024-10-29 22:14:55.124184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.730 [2024-10-29 22:14:55.124241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.730 [2024-10-29 22:14:55.124255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.730 #32 NEW cov: 12462 ft: 15067 corp: 21/511b lim: 40 exec/s: 32 rss: 75Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:35.730 [2024-10-29 22:14:55.184158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad50ad5 cdw11:d5000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.730 [2024-10-29 22:14:55.184184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.730 [2024-10-29 22:14:55.184241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:35.730 [2024-10-29 22:14:55.184255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.730 [2024-10-29 22:14:55.184313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:40000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.730 [2024-10-29 22:14:55.184328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.730 [2024-10-29 22:14:55.184384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0000d55d cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.730 [2024-10-29 22:14:55.184398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.730 #33 NEW cov: 12462 ft: 15079 corp: 22/545b lim: 40 exec/s: 33 rss: 75Mb L: 34/38 MS: 1 CopyPart- 00:07:35.730 [2024-10-29 22:14:55.223843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:feffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.730 [2024-10-29 22:14:55.223871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.730 #34 NEW cov: 12462 ft: 15082 corp: 23/554b lim: 40 exec/s: 34 rss: 75Mb L: 9/38 MS: 1 PersAutoDict- DE: "\376\377\377\377\000\000\000\000"- 00:07:35.989 [2024-10-29 22:14:55.264465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad5d5d5 cdw11:d5d5d52b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.264492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.989 [2024-10-29 22:14:55.264550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.264565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.989 [2024-10-29 22:14:55.264620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.264635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.989 [2024-10-29 22:14:55.264689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.264707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.989 #35 NEW cov: 12462 ft: 15099 corp: 24/588b lim: 40 exec/s: 35 rss: 75Mb L: 34/38 MS: 1 InsertByte- 00:07:35.989 [2024-10-29 22:14:55.304386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad50ad5 cdw11:d5000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.304413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.989 [2024-10-29 
22:14:55.304467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.304482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.989 [2024-10-29 22:14:55.304536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:000ad5d5 cdw11:d5000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.304550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.989 #36 NEW cov: 12462 ft: 15365 corp: 25/619b lim: 40 exec/s: 36 rss: 75Mb L: 31/38 MS: 1 CrossOver- 00:07:35.989 [2024-10-29 22:14:55.344682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.344709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.989 [2024-10-29 22:14:55.344763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.344777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.989 [2024-10-29 22:14:55.344830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5ffff cdw11:ffffd5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.344844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.989 [2024-10-29 22:14:55.344896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.344909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.989 #37 NEW cov: 12462 ft: 15374 corp: 26/652b lim: 40 exec/s: 37 rss: 75Mb L: 33/38 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:35.989 [2024-10-29 22:14:55.404537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0ad50a cdw11:d5dbd55d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.404563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.989 [2024-10-29 22:14:55.404619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.404634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.989 #38 NEW cov: 12462 ft: 15579 corp: 27/668b lim: 40 exec/s: 38 rss: 75Mb L: 16/38 MS: 1 CrossOver- 00:07:35.989 [2024-10-29 22:14:55.444458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad50ad5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.444485] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.989 #39 NEW cov: 12462 ft: 15643 corp: 28/683b lim: 40 exec/s: 39 rss: 75Mb L: 15/38 MS: 1 CrossOver- 00:07:35.989 [2024-10-29 22:14:55.504627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:feff00ff cdw11:00000a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.989 [2024-10-29 22:14:55.504653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.247 #41 NEW cov: 12462 ft: 15698 corp: 29/692b lim: 40 exec/s: 41 rss: 75Mb L: 9/38 MS: 2 EraseBytes-CopyPart- 00:07:36.247 [2024-10-29 22:14:55.565287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.247 [2024-10-29 22:14:55.565319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.247 [2024-10-29 22:14:55.565374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.247 [2024-10-29 22:14:55.565387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.247 [2024-10-29 22:14:55.565441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d523 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.247 [2024-10-29 22:14:55.565455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.247 [2024-10-29 22:14:55.565509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.247 [2024-10-29 22:14:55.565523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.247 #42 NEW cov: 12462 ft: 15712 corp: 30/726b lim: 40 exec/s: 42 rss: 75Mb L: 34/38 MS: 1 InsertByte- 00:07:36.247 [2024-10-29 22:14:55.604922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:feff00ff cdw11:00000a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.247 [2024-10-29 22:14:55.604948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.247 #43 NEW cov: 12462 ft: 15730 corp: 31/736b lim: 40 exec/s: 43 rss: 75Mb L: 10/38 MS: 1 InsertByte- 00:07:36.247 [2024-10-29 22:14:55.665605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.247 [2024-10-29 22:14:55.665632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.247 [2024-10-29 22:14:55.665686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.247 [2024-10-29 22:14:55.665700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:36.247 [2024-10-29 22:14:55.665754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d523 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.247 [2024-10-29 22:14:55.665769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.247 [2024-10-29 22:14:55.665822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5fad5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.247 [2024-10-29 22:14:55.665836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.247 #44 NEW cov: 12462 ft: 15756 corp: 32/770b lim: 40 exec/s: 44 rss: 75Mb L: 34/38 MS: 1 ChangeByte- 00:07:36.247 [2024-10-29 22:14:55.725285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ed50ad5 cdw11:d5d55dd5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.247 [2024-10-29 22:14:55.725321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.247 #45 NEW cov: 12462 ft: 15761 corp: 33/785b lim: 40 exec/s: 45 rss: 75Mb L: 15/38 MS: 1 ChangeBit- 00:07:36.247 [2024-10-29 22:14:55.765507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad50ad5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.247 [2024-10-29 22:14:55.765533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.247 [2024-10-29 22:14:55.765586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d59a cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.247 [2024-10-29 22:14:55.765601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.506 #46 NEW cov: 12462 ft: 15766 corp: 34/801b lim: 40 exec/s: 46 rss: 75Mb L: 16/38 MS: 1 InsertByte- 00:07:36.506 [2024-10-29 22:14:55.805955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ad5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.506 [2024-10-29 22:14:55.805981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.506 [2024-10-29 22:14:55.806035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.506 [2024-10-29 22:14:55.806049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.506 [2024-10-29 22:14:55.806101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5efd5 cdw11:23d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.506 [2024-10-29 22:14:55.806115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.506 [2024-10-29 22:14:55.806169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.506 [2024-10-29 
22:14:55.806183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.506 #47 NEW cov: 12462 ft: 15773 corp: 35/836b lim: 40 exec/s: 47 rss: 75Mb L: 35/38 MS: 1 InsertByte- 00:07:36.506 [2024-10-29 22:14:55.845597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.506 [2024-10-29 22:14:55.845623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.506 #48 NEW cov: 12462 ft: 15788 corp: 36/851b lim: 40 exec/s: 24 rss: 75Mb L: 15/38 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:07:36.506 #48 DONE cov: 12462 ft: 15788 corp: 36/851b lim: 40 exec/s: 24 rss: 75Mb 00:07:36.506 ###### Recommended dictionary. ###### 00:07:36.506 "\376\377\377\377\000\000\000\000" # Uses: 2 00:07:36.506 "\002\000\000\000\000\000\000\000" # Uses: 1 00:07:36.506 "\377\377\377\377" # Uses: 0 00:07:36.506 ###### End of recommended dictionary. ###### 00:07:36.506 Done 48 runs in 2 second(s) 00:07:36.506 22:14:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:36.506 22:14:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:36.506 22:14:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:36.506 22:14:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:36.506 22:14:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:36.506 22:14:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:36.506 22:14:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:36.506 22:14:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:36.506 22:14:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:36.506 22:14:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:36.506 22:14:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:36.506 22:14:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:36.506 22:14:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:36.506 22:14:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:36.506 22:14:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:36.506 22:14:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:36.506 22:14:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:36.506 22:14:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:36.506 22:14:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:36.764 [2024-10-29 22:14:56.039643] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:36.764 [2024-10-29 22:14:56.039704] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3104516 ] 00:07:36.764 [2024-10-29 22:14:56.251041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.022 [2024-10-29 22:14:56.290640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.022 [2024-10-29 22:14:56.349841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.022 [2024-10-29 22:14:56.366017] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:37.022 INFO: Running with entropic power schedule (0xFF, 100). 00:07:37.022 INFO: Seed: 543921988 00:07:37.022 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:37.022 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:37.022 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:37.022 INFO: A corpus is not provided, starting from an empty corpus 00:07:37.022 #2 INITED exec/s: 0 rss: 66Mb 00:07:37.022 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:37.022 This may also happen if the target rejected all inputs we tried so far 00:07:37.022 [2024-10-29 22:14:56.431490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.022 [2024-10-29 22:14:56.431521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.281 NEW_FUNC[1/714]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:37.281 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:37.281 #3 NEW cov: 12177 ft: 12201 corp: 2/11b lim: 40 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:37.281 [2024-10-29 22:14:56.772586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff66ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.281 [2024-10-29 22:14:56.772646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.565 NEW_FUNC[1/1]: 0x1f75c78 in thread_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1080 00:07:37.565 #4 NEW cov: 12336 ft: 12927 corp: 3/21b lim: 40 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ChangeByte- 00:07:37.565 [2024-10-29 22:14:56.842654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff66ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-10-29 22:14:56.842685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.565 [2024-10-29 22:14:56.842748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ffff66ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-10-29 22:14:56.842762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.565 #15 NEW cov: 12342 ft: 13510 corp: 4/41b lim: 40 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 CrossOver- 00:07:37.565 [2024-10-29 22:14:56.902765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff66ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-10-29 22:14:56.902795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.565 [2024-10-29 22:14:56.902854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ffff66ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-10-29 22:14:56.902869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.565 #16 NEW cov: 12427 ft: 13910 corp: 5/61b lim: 40 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:37.565 [2024-10-29 22:14:56.962843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-10-29 22:14:56.962869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.565 #17 NEW cov: 12427 ft: 14047 corp: 6/71b lim: 40 exec/s: 0 rss: 74Mb L: 10/20 MS: 1 CopyPart- 00:07:37.565 [2024-10-29 22:14:57.003204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-10-29 22:14:57.003229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.565 [2024-10-29 22:14:57.003289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-10-29 22:14:57.003310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.565 [2024-10-29 22:14:57.003367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c60a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-10-29 22:14:57.003381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.565 #18 NEW cov: 12427 ft: 14334 corp: 7/95b lim: 40 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:07:37.565 [2024-10-29 22:14:57.043569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-10-29 22:14:57.043593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.565 [2024-10-29 22:14:57.043653] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-10-29 22:14:57.043667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.565 [2024-10-29 22:14:57.043741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-10-29 22:14:57.043755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.565 [2024-10-29 22:14:57.043816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-10-29 22:14:57.043830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.565 [2024-10-29 22:14:57.043887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-10-29 22:14:57.043901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.822 #19 NEW cov: 12427 ft: 14881 corp: 8/135b lim: 40 exec/s: 0 rss: 75Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:37.822 [2024-10-29 22:14:57.103240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:fff7ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.822 [2024-10-29 22:14:57.103265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.822 #20 NEW cov: 12427 ft: 14963 corp: 9/145b lim: 40 exec/s: 0 rss: 75Mb L: 10/40 MS: 1 ChangeBit- 00:07:37.822 [2024-10-29 22:14:57.143440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:fe66ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.822 [2024-10-29 22:14:57.143465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.822 [2024-10-29 22:14:57.143528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ffff66ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.822 [2024-10-29 22:14:57.143542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.822 #21 NEW cov: 12427 ft: 15012 corp: 10/165b lim: 40 exec/s: 0 rss: 75Mb L: 20/40 MS: 1 ChangeBit- 00:07:37.822 [2024-10-29 22:14:57.203914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.822 [2024-10-29 22:14:57.203940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.822 [2024-10-29 22:14:57.203998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.822 [2024-10-29 22:14:57.204012] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.822 [2024-10-29 22:14:57.204070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fffffe66 cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.822 [2024-10-29 22:14:57.204084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.822 [2024-10-29 22:14:57.204141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:66ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.822 [2024-10-29 22:14:57.204154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.822 #22 NEW cov: 12427 ft: 15059 corp: 11/199b lim: 40 exec/s: 0 rss: 75Mb L: 34/40 MS: 1 InsertRepeatedBytes- 00:07:37.822 [2024-10-29 22:14:57.263786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:fe66ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.822 [2024-10-29 22:14:57.263815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.822 [2024-10-29 22:14:57.263872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ffff66ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.822 [2024-10-29 22:14:57.263887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.822 #23 NEW cov: 12427 ft: 15153 corp: 12/222b lim: 40 exec/s: 0 rss: 75Mb L: 23/40 MS: 1 InsertRepeatedBytes- 00:07:37.822 [2024-10-29 22:14:57.303777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff66ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.822 [2024-10-29 22:14:57.303802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.822 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:37.822 #24 NEW cov: 12450 ft: 15230 corp: 13/233b lim: 40 exec/s: 0 rss: 75Mb L: 11/40 MS: 1 EraseBytes- 00:07:37.822 [2024-10-29 22:14:57.343982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.822 [2024-10-29 22:14:57.344008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.823 [2024-10-29 22:14:57.344068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.823 [2024-10-29 22:14:57.344083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.080 #25 NEW cov: 12450 ft: 15256 corp: 14/256b lim: 40 exec/s: 0 rss: 75Mb L: 23/40 MS: 1 InsertRepeatedBytes- 00:07:38.080 [2024-10-29 22:14:57.384257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:fe66ffff SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:38.080 [2024-10-29 22:14:57.384281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.080 [2024-10-29 22:14:57.384344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:420affff cdw11:ffff66ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.080 [2024-10-29 22:14:57.384358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.080 #26 NEW cov: 12450 ft: 15281 corp: 15/279b lim: 40 exec/s: 26 rss: 75Mb L: 23/40 MS: 1 ChangeByte- 00:07:38.080 [2024-10-29 22:14:57.444429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff66ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.080 [2024-10-29 22:14:57.444454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.080 [2024-10-29 22:14:57.444514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff9292 cdw11:92929292 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.080 [2024-10-29 22:14:57.444528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.080 #27 NEW cov: 12450 ft: 15313 corp: 16/299b lim: 40 exec/s: 27 rss: 75Mb L: 20/40 MS: 1 InsertRepeatedBytes- 00:07:38.080 [2024-10-29 22:14:57.484389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff24 cdw11:fffff7ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.080 [2024-10-29 22:14:57.484414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.080 #28 NEW cov: 12450 ft: 15367 corp: 17/310b lim: 40 exec/s: 28 rss: 75Mb L: 11/40 MS: 1 InsertByte- 00:07:38.080 [2024-10-29 22:14:57.544598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0aff3024 cdw11:fffff7ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.080 [2024-10-29 22:14:57.544623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.080 #29 NEW cov: 12450 ft: 15371 corp: 18/321b lim: 40 exec/s: 29 rss: 75Mb L: 11/40 MS: 1 ChangeByte- 00:07:38.338 [2024-10-29 22:14:57.605144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff66ff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.605170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.338 [2024-10-29 22:14:57.605235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.605249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.338 [2024-10-29 22:14:57.605311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.605333] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.338 [2024-10-29 22:14:57.605393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:66ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.605406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.338 #30 NEW cov: 12450 ft: 15402 corp: 19/355b lim: 40 exec/s: 30 rss: 75Mb L: 34/40 MS: 1 InsertRepeatedBytes- 00:07:38.338 [2024-10-29 22:14:57.645003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff66ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.645029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.338 [2024-10-29 22:14:57.645087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff0aff66 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.645101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.338 #31 NEW cov: 12450 ft: 15460 corp: 20/372b lim: 40 exec/s: 31 rss: 75Mb L: 17/40 MS: 1 EraseBytes- 00:07:38.338 [2024-10-29 22:14:57.685256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:0a363636 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.685281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.338 [2024-10-29 22:14:57.685345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:36363636 cdw11:36363636 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.685360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.338 [2024-10-29 22:14:57.685419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:36363636 cdw11:36363636 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.685433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.338 #35 NEW cov: 12450 ft: 15559 corp: 21/400b lim: 40 exec/s: 35 rss: 75Mb L: 28/40 MS: 4 CrossOver-CopyPart-EraseBytes-InsertRepeatedBytes- 00:07:38.338 [2024-10-29 22:14:57.745186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff32 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.745214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.338 #36 NEW cov: 12450 ft: 15616 corp: 22/410b lim: 40 exec/s: 36 rss: 75Mb L: 10/40 MS: 1 ChangeByte- 00:07:38.338 [2024-10-29 22:14:57.785646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.785670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.338 [2024-10-29 22:14:57.785730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.785743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.338 [2024-10-29 22:14:57.785800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fffffe66 cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.785814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.338 [2024-10-29 22:14:57.785871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:66ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.785884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.338 #37 NEW cov: 12450 ft: 15642 corp: 23/444b lim: 40 exec/s: 37 rss: 75Mb L: 34/40 MS: 1 CMP- DE: "\013\000"- 00:07:38.338 [2024-10-29 22:14:57.845441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.338 [2024-10-29 22:14:57.845467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.597 #43 NEW cov: 12450 ft: 15655 corp: 24/454b lim: 40 exec/s: 43 rss: 75Mb L: 10/40 MS: 1 ShuffleBytes- 00:07:38.597 [2024-10-29 22:14:57.885918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:57.885943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.597 [2024-10-29 22:14:57.886001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:57.886015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.597 [2024-10-29 22:14:57.886088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:57.886102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.597 [2024-10-29 22:14:57.886161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:57.886175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.597 #44 NEW cov: 12450 ft: 15661 corp: 25/490b lim: 40 exec/s: 44 rss: 75Mb L: 36/40 MS: 1 InsertRepeatedBytes- 00:07:38.597 [2024-10-29 22:14:57.925627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0aff0000 cdw11:000bf7ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:57.925652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.597 #45 NEW cov: 12450 ft: 15684 corp: 26/501b lim: 40 exec/s: 45 rss: 75Mb L: 11/40 MS: 1 ChangeBinInt- 00:07:38.597 [2024-10-29 22:14:57.986216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0b000b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:57.986242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.597 [2024-10-29 22:14:57.986303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:57.986317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.597 [2024-10-29 22:14:57.986373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0affffff cdw11:fe66ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:57.986387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.597 [2024-10-29 22:14:57.986445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff0affff cdw11:ffff66ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:57.986458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.597 #46 NEW cov: 12450 ft: 15716 corp: 27/537b lim: 40 exec/s: 46 rss: 75Mb L: 36/40 MS: 1 PersAutoDict- DE: "\013\000"- 00:07:38.597 [2024-10-29 22:14:58.046255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff66ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:58.046281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.597 [2024-10-29 22:14:58.046372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:58.046388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.597 [2024-10-29 22:14:58.046447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:58.046460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.597 #47 NEW cov: 12450 ft: 15738 corp: 28/568b lim: 40 exec/s: 47 rss: 75Mb L: 31/40 MS: 1 InsertRepeatedBytes- 00:07:38.597 [2024-10-29 22:14:58.086491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:58.086516] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.597 [2024-10-29 22:14:58.086579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:58.086593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.597 [2024-10-29 22:14:58.086652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fffffe66 cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:58.086666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.597 [2024-10-29 22:14:58.086725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff66ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.597 [2024-10-29 22:14:58.086742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.597 #48 NEW cov: 12450 ft: 15760 corp: 29/604b lim: 40 exec/s: 48 rss: 75Mb L: 36/40 MS: 1 CopyPart- 00:07:38.855 [2024-10-29 22:14:58.126592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff32 cdw11:ffff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.126617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.855 [2024-10-29 22:14:58.126678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fffffe66 cdw11:ffff420a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.126691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.855 [2024-10-29 22:14:58.126751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff66ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.126764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.855 [2024-10-29 22:14:58.126825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.126838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.855 #49 NEW cov: 12450 ft: 15767 corp: 30/637b lim: 40 exec/s: 49 rss: 75Mb L: 33/40 MS: 1 CrossOver- 00:07:38.855 [2024-10-29 22:14:58.186791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff66ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.186817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.855 [2024-10-29 22:14:58.186876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:38.855 [2024-10-29 22:14:58.186890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.855 [2024-10-29 22:14:58.186963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ffff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.186978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.855 [2024-10-29 22:14:58.187038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:66ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.187052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.855 #50 NEW cov: 12450 ft: 15786 corp: 31/671b lim: 40 exec/s: 50 rss: 75Mb L: 34/40 MS: 1 CrossOver- 00:07:38.855 [2024-10-29 22:14:58.226644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:fe66ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.226671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.855 [2024-10-29 22:14:58.226729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ff66ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.226743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.855 #51 NEW cov: 12450 ft: 15793 corp: 32/694b lim: 40 exec/s: 51 rss: 75Mb L: 23/40 MS: 1 ShuffleBytes- 00:07:38.855 [2024-10-29 22:14:58.267106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:0b00ff66 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.267132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.855 [2024-10-29 22:14:58.267195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff0a cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.267209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.855 [2024-10-29 22:14:58.267270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.267284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.855 [2024-10-29 22:14:58.267348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0affffff cdw11:ffff66ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.267362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.855 [2024-10-29 22:14:58.326927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 
cdw10:0affffff cdw11:0b00ff66 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.326952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.855 [2024-10-29 22:14:58.327011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.327025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.855 #53 NEW cov: 12450 ft: 15805 corp: 33/716b lim: 40 exec/s: 53 rss: 76Mb L: 22/40 MS: 2 PersAutoDict-EraseBytes- DE: "\013\000"- 00:07:38.855 [2024-10-29 22:14:58.367019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:0b00ff66 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.367044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.855 [2024-10-29 22:14:58.367103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff4a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.855 [2024-10-29 22:14:58.367118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.112 #54 NEW cov: 12450 ft: 15856 corp: 34/738b lim: 40 exec/s: 27 rss: 76Mb L: 22/40 MS: 1 ChangeBit- 00:07:39.112 #54 DONE cov: 12450 ft: 15856 corp: 34/738b lim: 40 exec/s: 27 rss: 76Mb 00:07:39.112 ###### Recommended dictionary. ###### 00:07:39.112 "\013\000" # Uses: 2 00:07:39.112 ###### End of recommended dictionary. 
###### 00:07:39.112 Done 54 runs in 2 second(s) 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:39.112 22:14:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:39.112 [2024-10-29 22:14:58.562129] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:07:39.112 [2024-10-29 22:14:58.562196] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3104875 ] 00:07:39.370 [2024-10-29 22:14:58.763629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.370 [2024-10-29 22:14:58.802320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.370 [2024-10-29 22:14:58.861467] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.370 [2024-10-29 22:14:58.877629] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:39.370 INFO: Running with entropic power schedule (0xFF, 100). 00:07:39.370 INFO: Seed: 3053954966 00:07:39.628 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:39.628 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:39.628 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:39.628 INFO: A corpus is not provided, starting from an empty corpus 00:07:39.628 #2 INITED exec/s: 0 rss: 66Mb 00:07:39.628 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:39.628 This may also happen if the target rejected all inputs we tried so far 00:07:39.628 [2024-10-29 22:14:58.937408] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.628 [2024-10-29 22:14:58.937441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.628 [2024-10-29 22:14:58.937515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.628 [2024-10-29 22:14:58.937532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.628 [2024-10-29 22:14:58.937554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.628 [2024-10-29 22:14:58.937570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.628 [2024-10-29 22:14:58.937628] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.628 [2024-10-29 22:14:58.937643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.887 NEW_FUNC[1/717]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:39.887 NEW_FUNC[2/717]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:39.887 #9 NEW cov: 12228 ft: 12221 corp: 2/30b lim: 35 exec/s: 0 rss: 74Mb L: 29/29 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:39.887 [2024-10-29 22:14:59.280028] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:39.887 [2024-10-29 22:14:59.280070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.887 [2024-10-29 22:14:59.280167] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.887 [2024-10-29 22:14:59.280184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.887 [2024-10-29 22:14:59.280275] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.887 [2024-10-29 22:14:59.280292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.887 #11 NEW cov: 12341 ft: 13098 corp: 3/55b lim: 35 exec/s: 0 rss: 74Mb L: 25/29 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:39.887 [2024-10-29 22:14:59.340775] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.887 [2024-10-29 22:14:59.340807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.887 [2024-10-29 22:14:59.340898] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.887 [2024-10-29 22:14:59.340916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.887 [2024-10-29 22:14:59.341014] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.887 [2024-10-29 22:14:59.341033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.887 [2024-10-29 22:14:59.341131] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.887 [2024-10-29 22:14:59.341150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.887 #12 NEW cov: 12347 ft: 13340 corp: 4/84b lim: 35 exec/s: 0 rss: 74Mb L: 29/29 MS: 1 ChangeBit- 00:07:40.146 [2024-10-29 22:14:59.420852] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.146 [2024-10-29 22:14:59.420883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.146 [2024-10-29 22:14:59.420988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.146 [2024-10-29 22:14:59.421009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.146 [2024-10-29 22:14:59.421107] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.146 [2024-10-29 22:14:59.421123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT 
SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.146 #15 NEW cov: 12432 ft: 13612 corp: 5/111b lim: 35 exec/s: 0 rss: 74Mb L: 27/29 MS: 3 CrossOver-CrossOver-InsertRepeatedBytes- 00:07:40.146 [2024-10-29 22:14:59.471541] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.146 [2024-10-29 22:14:59.471571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.146 [2024-10-29 22:14:59.471678] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.146 [2024-10-29 22:14:59.471696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.146 [2024-10-29 22:14:59.471788] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.146 [2024-10-29 22:14:59.471808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.146 [2024-10-29 22:14:59.471912] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000007a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.146 [2024-10-29 22:14:59.471931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.146 #16 NEW cov: 12432 ft: 13782 corp: 6/140b lim: 35 exec/s: 0 rss: 74Mb L: 29/29 MS: 1 ChangeByte- 00:07:40.146 [2024-10-29 22:14:59.522001] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.146 [2024-10-29 22:14:59.522030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.146 [2024-10-29 22:14:59.522141] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.146 [2024-10-29 22:14:59.522159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.146 [2024-10-29 22:14:59.522256] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.146 [2024-10-29 22:14:59.522277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.147 [2024-10-29 22:14:59.522377] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000007a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.147 [2024-10-29 22:14:59.522396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.147 #17 NEW cov: 12432 ft: 13846 corp: 7/169b lim: 35 exec/s: 0 rss: 74Mb L: 29/29 MS: 1 ChangeBit- 00:07:40.147 [2024-10-29 22:14:59.591649] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.147 [2024-10-29 22:14:59.591679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.147 [2024-10-29 22:14:59.591788] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.147 [2024-10-29 22:14:59.591804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.147 #22 NEW cov: 12439 ft: 14155 corp: 8/189b lim: 35 exec/s: 0 rss: 74Mb L: 20/29 MS: 5 ShuffleBytes-ShuffleBytes-CopyPart-InsertByte-InsertRepeatedBytes- 00:07:40.147 [2024-10-29 22:14:59.642630] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.147 [2024-10-29 22:14:59.642659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.147 [2024-10-29 22:14:59.642767] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.147 [2024-10-29 22:14:59.642784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.147 [2024-10-29 22:14:59.642873] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.147 [2024-10-29 22:14:59.642890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.147 [2024-10-29 22:14:59.642986] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000007a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.147 [2024-10-29 22:14:59.643004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.147 #23 NEW cov: 12439 ft: 14188 corp: 9/218b lim: 35 exec/s: 0 rss: 74Mb L: 29/29 MS: 1 ChangeBit- 00:07:40.406 [2024-10-29 22:14:59.692826] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.406 [2024-10-29 22:14:59.692854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.406 [2024-10-29 22:14:59.692955] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.406 [2024-10-29 22:14:59.692975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.406 [2024-10-29 22:14:59.693078] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.406 [2024-10-29 22:14:59.693095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.406 [2024-10-29 22:14:59.693185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.406 [2024-10-29 22:14:59.693204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.406 #24 
NEW cov: 12439 ft: 14246 corp: 10/247b lim: 35 exec/s: 0 rss: 74Mb L: 29/29 MS: 1 CopyPart- 00:07:40.406 [2024-10-29 22:14:59.763232] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.406 [2024-10-29 22:14:59.763259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.406 [2024-10-29 22:14:59.763359] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.406 [2024-10-29 22:14:59.763378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.406 [2024-10-29 22:14:59.763471] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.406 [2024-10-29 22:14:59.763491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.406 [2024-10-29 22:14:59.763578] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.406 [2024-10-29 22:14:59.763596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.406 #25 NEW cov: 12439 ft: 14292 corp: 11/277b lim: 35 exec/s: 0 rss: 75Mb L: 30/30 MS: 1 InsertByte- 00:07:40.406 [2024-10-29 22:14:59.833110] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.406 [2024-10-29 22:14:59.833139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.406 [2024-10-29 22:14:59.833239] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.406 [2024-10-29 22:14:59.833258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.406 [2024-10-29 22:14:59.833361] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.406 [2024-10-29 22:14:59.833377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.406 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:40.406 #26 NEW cov: 12462 ft: 14345 corp: 12/302b lim: 35 exec/s: 0 rss: 75Mb L: 25/30 MS: 1 CopyPart- 00:07:40.407 [2024-10-29 22:14:59.883611] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.407 [2024-10-29 22:14:59.883637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.407 [2024-10-29 22:14:59.883739] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.407 [2024-10-29 22:14:59.883754] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.407 [2024-10-29 22:14:59.883847] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.407 [2024-10-29 22:14:59.883863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.407 [2024-10-29 22:14:59.883934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.407 [2024-10-29 22:14:59.883950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.407 #27 NEW cov: 12462 ft: 14394 corp: 13/331b lim: 35 exec/s: 0 rss: 75Mb L: 29/30 MS: 1 CrossOver- 00:07:40.666 [2024-10-29 22:14:59.933485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.666 [2024-10-29 22:14:59.933513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.666 [2024-10-29 22:14:59.933607] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.666 [2024-10-29 22:14:59.933625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.666 [2024-10-29 22:14:59.933725] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.666 [2024-10-29 22:14:59.933742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.666 #28 NEW cov: 12462 ft: 14421 corp: 14/356b lim: 35 exec/s: 28 rss: 75Mb L: 25/30 MS: 1 CopyPart- 00:07:40.666 [2024-10-29 22:14:59.984085] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.666 [2024-10-29 22:14:59.984114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.666 [2024-10-29 22:14:59.984207] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.666 [2024-10-29 22:14:59.984228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.666 [2024-10-29 22:14:59.984325] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.666 [2024-10-29 22:14:59.984343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.666 [2024-10-29 22:14:59.984435] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000007a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.667 [2024-10-29 22:14:59.984455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.667 #29 NEW cov: 
12462 ft: 14436 corp: 15/385b lim: 35 exec/s: 29 rss: 75Mb L: 29/30 MS: 1 ShuffleBytes- 00:07:40.667 [2024-10-29 22:15:00.034210] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.667 [2024-10-29 22:15:00.034244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.667 [2024-10-29 22:15:00.034346] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.667 [2024-10-29 22:15:00.034365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.667 [2024-10-29 22:15:00.034460] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.667 [2024-10-29 22:15:00.034478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.667 [2024-10-29 22:15:00.034572] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.667 [2024-10-29 22:15:00.034591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.667 #30 NEW cov: 12462 ft: 14577 corp: 16/414b lim: 35 exec/s: 30 rss: 75Mb L: 29/30 MS: 1 CopyPart- 00:07:40.667 [2024-10-29 22:15:00.114642] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.667 [2024-10-29 22:15:00.114680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.667 [2024-10-29 22:15:00.114770] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.667 [2024-10-29 22:15:00.114787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.667 [2024-10-29 22:15:00.114878] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.667 [2024-10-29 22:15:00.114899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.667 [2024-10-29 22:15:00.114995] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.667 [2024-10-29 22:15:00.115014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.667 #31 NEW cov: 12462 ft: 14603 corp: 17/447b lim: 35 exec/s: 31 rss: 75Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:40.667 [2024-10-29 22:15:00.184703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.667 [2024-10-29 22:15:00.184740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.667 [2024-10-29 
22:15:00.184830] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.667 [2024-10-29 22:15:00.184849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.667 [2024-10-29 22:15:00.184962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.667 [2024-10-29 22:15:00.184977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.667 [2024-10-29 22:15:00.185061] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.667 [2024-10-29 22:15:00.185081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.926 #32 NEW cov: 12462 ft: 14618 corp: 18/476b lim: 35 exec/s: 32 rss: 75Mb L: 29/33 MS: 1 ChangeByte- 00:07:40.926 [2024-10-29 22:15:00.254940] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.926 [2024-10-29 22:15:00.254976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.926 [2024-10-29 22:15:00.255076] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.926 [2024-10-29 22:15:00.255094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.926 [2024-10-29 22:15:00.255190] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.926 [2024-10-29 22:15:00.255209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.926 [2024-10-29 22:15:00.255303] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.926 [2024-10-29 22:15:00.255322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.926 #33 NEW cov: 12462 ft: 14641 corp: 19/506b lim: 35 exec/s: 33 rss: 75Mb L: 30/33 MS: 1 ChangeByte- 00:07:40.926 [2024-10-29 22:15:00.335308] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.926 [2024-10-29 22:15:00.335346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.926 [2024-10-29 22:15:00.335456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.926 [2024-10-29 22:15:00.335471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.926 [2024-10-29 22:15:00.335566] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:40.926 [2024-10-29 22:15:00.335582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.926 [2024-10-29 22:15:00.335677] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.926 [2024-10-29 22:15:00.335696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.926 #34 NEW cov: 12462 ft: 14794 corp: 20/536b lim: 35 exec/s: 34 rss: 75Mb L: 30/33 MS: 1 InsertByte- 00:07:40.926 [2024-10-29 22:15:00.415137] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.926 [2024-10-29 22:15:00.415172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.926 [2024-10-29 22:15:00.415292] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.926 [2024-10-29 22:15:00.415320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.926 [2024-10-29 22:15:00.415444] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.926 [2024-10-29 22:15:00.415472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.185 #35 NEW cov: 12462 ft: 14805 corp: 21/563b lim: 35 exec/s: 35 rss: 75Mb L: 27/33 MS: 1 ChangeBinInt- 00:07:41.185 [2024-10-29 22:15:00.485481] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.185 [2024-10-29 22:15:00.485517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.185 [2024-10-29 22:15:00.485617] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.185 [2024-10-29 22:15:00.485644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.185 [2024-10-29 22:15:00.485736] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.185 [2024-10-29 22:15:00.485753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.185 #36 NEW cov: 12462 ft: 14850 corp: 22/590b lim: 35 exec/s: 36 rss: 75Mb L: 27/33 MS: 1 ChangeByte- 00:07:41.185 [2024-10-29 22:15:00.536023] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.185 [2024-10-29 22:15:00.536059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.185 [2024-10-29 22:15:00.536148] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.185 
[2024-10-29 22:15:00.536169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.185 [2024-10-29 22:15:00.536273] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.185 [2024-10-29 22:15:00.536293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.185 [2024-10-29 22:15:00.536395] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.185 [2024-10-29 22:15:00.536415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.185 #37 NEW cov: 12462 ft: 14880 corp: 23/619b lim: 35 exec/s: 37 rss: 75Mb L: 29/33 MS: 1 ChangeByte- 00:07:41.185 [2024-10-29 22:15:00.586235] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.185 [2024-10-29 22:15:00.586267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.185 [2024-10-29 22:15:00.586371] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.185 [2024-10-29 22:15:00.586389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.185 [2024-10-29 22:15:00.586486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.185 [2024-10-29 22:15:00.586506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.185 [2024-10-29 22:15:00.586601] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000fc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.185 [2024-10-29 22:15:00.586618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.186 #38 NEW cov: 12462 ft: 14908 corp: 24/650b lim: 35 exec/s: 38 rss: 75Mb L: 31/33 MS: 1 InsertByte- 00:07:41.186 [2024-10-29 22:15:00.656394] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.186 [2024-10-29 22:15:00.656426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.186 [2024-10-29 22:15:00.656529] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.186 [2024-10-29 22:15:00.656547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.186 [2024-10-29 22:15:00.656643] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.186 [2024-10-29 22:15:00.656659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:41.186 [2024-10-29 22:15:00.656756] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.186 [2024-10-29 22:15:00.656775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.186 #39 NEW cov: 12462 ft: 14918 corp: 25/679b lim: 35 exec/s: 39 rss: 75Mb L: 29/33 MS: 1 CopyPart- 00:07:41.186 [2024-10-29 22:15:00.706712] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.186 [2024-10-29 22:15:00.706744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.186 [2024-10-29 22:15:00.706838] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.186 [2024-10-29 22:15:00.706856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.186 [2024-10-29 22:15:00.706950] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.186 [2024-10-29 22:15:00.706967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.186 [2024-10-29 22:15:00.707059] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.186 [2024-10-29 22:15:00.707076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.445 #40 NEW cov: 12462 ft: 14921 corp: 26/708b lim: 35 exec/s: 40 rss: 75Mb L: 29/33 MS: 1 ChangeBinInt- 00:07:41.445 [2024-10-29 22:15:00.756809] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.445 [2024-10-29 22:15:00.756842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.445 [2024-10-29 22:15:00.756933] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.445 [2024-10-29 22:15:00.756953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.445 [2024-10-29 22:15:00.757055] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.445 [2024-10-29 22:15:00.757072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.445 [2024-10-29 22:15:00.757146] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.445 [2024-10-29 22:15:00.757166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.445 #41 NEW cov: 12462 ft: 14932 corp: 27/737b lim: 35 exec/s: 41 rss: 75Mb L: 29/33 MS: 1 
ShuffleBytes- 00:07:41.445 [2024-10-29 22:15:00.837447] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.445 [2024-10-29 22:15:00.837480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.445 [2024-10-29 22:15:00.837585] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.445 [2024-10-29 22:15:00.837603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.445 [2024-10-29 22:15:00.837691] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.445 [2024-10-29 22:15:00.837708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.445 [2024-10-29 22:15:00.837806] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.445 [2024-10-29 22:15:00.837827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.445 [2024-10-29 22:15:00.837927] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.445 [2024-10-29 22:15:00.837944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.445 #42 NEW cov: 12462 ft: 14988 corp: 28/772b lim: 35 exec/s: 42 rss: 75Mb L: 35/35 MS: 1 CrossOver- 00:07:41.445 [2024-10-29 22:15:00.907370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.445 [2024-10-29 22:15:00.907413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.445 [2024-10-29 22:15:00.907508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.445 [2024-10-29 22:15:00.907527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.445 [2024-10-29 22:15:00.907621] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.445 [2024-10-29 22:15:00.907637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.445 [2024-10-29 22:15:00.907726] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.445 [2024-10-29 22:15:00.907744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.445 #43 NEW cov: 12462 ft: 14991 corp: 29/806b lim: 35 exec/s: 21 rss: 75Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:07:41.445 #43 DONE cov: 12462 ft: 14991 corp: 29/806b lim: 35 exec/s: 21 rss: 75Mb 00:07:41.445 Done 43 runs in 2 
second(s) 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:41.704 22:15:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:41.704 [2024-10-29 22:15:01.092656] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:41.704 [2024-10-29 22:15:01.092732] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3105302 ] 00:07:41.962 [2024-10-29 22:15:01.303628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.963 [2024-10-29 22:15:01.344853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.963 [2024-10-29 22:15:01.404710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.963 [2024-10-29 22:15:01.420882] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:41.963 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:41.963 INFO: Seed: 1303961082 00:07:41.963 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:41.963 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:41.963 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:41.963 INFO: A corpus is not provided, starting from an empty corpus 00:07:41.963 #2 INITED exec/s: 0 rss: 66Mb 00:07:41.963 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:41.963 This may also happen if the target rejected all inputs we tried so far 00:07:41.963 [2024-10-29 22:15:01.486409] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.963 [2024-10-29 22:15:01.486451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.478 NEW_FUNC[1/715]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:42.478 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:42.478 #15 NEW cov: 12188 ft: 12189 corp: 2/11b lim: 35 exec/s: 0 rss: 74Mb L: 10/10 MS: 3 CopyPart-ChangeByte-InsertRepeatedBytes- 00:07:42.478 NEW_FUNC[1/1]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:42.478 #33 NEW cov: 12332 ft: 12828 corp: 3/22b lim: 35 exec/s: 0 rss: 74Mb L: 11/11 MS: 3 CopyPart-CopyPart-InsertRepeatedBytes- 00:07:42.478 [2024-10-29 22:15:01.909079] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.478 [2024-10-29 22:15:01.909119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.478 #34 NEW cov: 12338 ft: 13055 corp: 4/32b lim: 35 exec/s: 0 rss: 74Mb L: 10/11 MS: 1 ShuffleBytes- 00:07:42.478 [2024-10-29 22:15:01.979851] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.478 [2024-10-29 22:15:01.979879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.478 [2024-10-29 22:15:01.980073] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000245 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.478 [2024-10-29 22:15:01.980089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.736 #35 NEW cov: 12423 ft: 13694 corp: 5/53b lim: 35 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 CrossOver- 00:07:42.736 [2024-10-29 22:15:02.040250] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.736 [2024-10-29 22:15:02.040277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.736 [2024-10-29 22:15:02.040485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.737 
[2024-10-29 22:15:02.040502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.737 #41 NEW cov: 12423 ft: 13812 corp: 6/74b lim: 35 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 ChangeBinInt- 00:07:42.737 [2024-10-29 22:15:02.109882] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.737 [2024-10-29 22:15:02.109910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.737 #42 NEW cov: 12423 ft: 13865 corp: 7/86b lim: 35 exec/s: 0 rss: 74Mb L: 12/21 MS: 1 CrossOver- 00:07:42.737 [2024-10-29 22:15:02.180169] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.737 [2024-10-29 22:15:02.180198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.737 #43 NEW cov: 12423 ft: 13941 corp: 8/97b lim: 35 exec/s: 0 rss: 74Mb L: 11/21 MS: 1 InsertByte- 00:07:42.737 [2024-10-29 22:15:02.231033] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.737 [2024-10-29 22:15:02.231061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.737 [2024-10-29 22:15:02.231151] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.737 [2024-10-29 22:15:02.231169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.737 [2024-10-29 22:15:02.231268] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.737 [2024-10-29 22:15:02.231288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.737 #45 NEW cov: 12423 ft: 14108 corp: 9/124b lim: 35 exec/s: 0 rss: 74Mb L: 27/27 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:42.996 [2024-10-29 22:15:02.281185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.996 [2024-10-29 22:15:02.281213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.996 [2024-10-29 22:15:02.281309] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.996 [2024-10-29 22:15:02.281326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.996 [2024-10-29 22:15:02.281419] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.996 [2024-10-29 22:15:02.281434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.996 #46 NEW cov: 12423 ft: 14153 corp: 10/151b lim: 35 exec/s: 0 rss: 74Mb L: 27/27 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\020"- 00:07:42.996 [2024-10-29 
22:15:02.351312] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.996 [2024-10-29 22:15:02.351340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.996 [2024-10-29 22:15:02.351444] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.996 [2024-10-29 22:15:02.351460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.996 [2024-10-29 22:15:02.351552] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.996 [2024-10-29 22:15:02.351569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.996 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:42.996 #47 NEW cov: 12446 ft: 14269 corp: 11/172b lim: 35 exec/s: 0 rss: 74Mb L: 21/27 MS: 1 EraseBytes- 00:07:42.996 [2024-10-29 22:15:02.401619] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.996 [2024-10-29 22:15:02.401646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.996 [2024-10-29 22:15:02.401744] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.996 [2024-10-29 22:15:02.401760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.996 [2024-10-29 22:15:02.401860] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.996 [2024-10-29 22:15:02.401876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.996 #48 NEW cov: 12446 ft: 14308 corp: 12/193b lim: 35 exec/s: 0 rss: 74Mb L: 21/27 MS: 1 ChangeBit- 00:07:42.996 [2024-10-29 22:15:02.471340] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.996 [2024-10-29 22:15:02.471366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.996 #49 NEW cov: 12446 ft: 14328 corp: 13/203b lim: 35 exec/s: 49 rss: 74Mb L: 10/27 MS: 1 CrossOver- 00:07:43.255 [2024-10-29 22:15:02.521954] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000645 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.255 [2024-10-29 22:15:02.521981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.255 #50 NEW cov: 12446 ft: 14499 corp: 14/217b lim: 35 exec/s: 50 rss: 74Mb L: 14/27 MS: 1 CrossOver- 00:07:43.255 [2024-10-29 22:15:02.592354] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.255 [2024-10-29 22:15:02.592382] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.255 [2024-10-29 22:15:02.592487] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.255 [2024-10-29 22:15:02.592503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.255 [2024-10-29 22:15:02.592604] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.255 [2024-10-29 22:15:02.592621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.255 #51 NEW cov: 12446 ft: 14509 corp: 15/244b lim: 35 exec/s: 51 rss: 74Mb L: 27/27 MS: 1 CopyPart- 00:07:43.255 [2024-10-29 22:15:02.642503] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.255 [2024-10-29 22:15:02.642530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.255 [2024-10-29 22:15:02.642632] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:5 cdw10:00000610 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.255 [2024-10-29 22:15:02.642648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.255 [2024-10-29 22:15:02.642741] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.255 [2024-10-29 22:15:02.642756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.255 #52 NEW cov: 12446 ft: 14524 corp: 16/265b lim: 35 exec/s: 52 rss: 74Mb L: 21/27 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\020"- 00:07:43.255 [2024-10-29 22:15:02.712070] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.255 [2024-10-29 22:15:02.712098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.255 #53 NEW cov: 12446 ft: 14536 corp: 17/277b lim: 35 exec/s: 53 rss: 74Mb L: 12/27 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\020"- 00:07:43.514 [2024-10-29 22:15:02.783983] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.514 [2024-10-29 22:15:02.784011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.514 [2024-10-29 22:15:02.784118] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:5 cdw10:00000610 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.514 [2024-10-29 22:15:02.784135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.514 [2024-10-29 22:15:02.784232] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:43.514 [2024-10-29 22:15:02.784248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.514 [2024-10-29 22:15:02.784358] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.514 [2024-10-29 22:15:02.784374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.514 [2024-10-29 22:15:02.784475] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.514 [2024-10-29 22:15:02.784490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:43.514 #54 NEW cov: 12446 ft: 15105 corp: 18/312b lim: 35 exec/s: 54 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:07:43.514 [2024-10-29 22:15:02.853572] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.514 [2024-10-29 22:15:02.853599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.514 [2024-10-29 22:15:02.853712] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.514 [2024-10-29 22:15:02.853729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.514 [2024-10-29 22:15:02.853843] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.514 [2024-10-29 22:15:02.853859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.514 #55 NEW cov: 12446 ft: 15126 corp: 19/333b lim: 35 exec/s: 55 rss: 74Mb L: 21/35 MS: 1 ShuffleBytes- 00:07:43.514 [2024-10-29 22:15:02.903594] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.514 [2024-10-29 22:15:02.903620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.514 [2024-10-29 22:15:02.903726] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.514 [2024-10-29 22:15:02.903743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.514 #56 NEW cov: 12446 ft: 15146 corp: 20/352b lim: 35 exec/s: 56 rss: 74Mb L: 19/35 MS: 1 EraseBytes- 00:07:43.515 [2024-10-29 22:15:02.973529] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.515 [2024-10-29 22:15:02.973557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.515 #57 NEW cov: 12446 ft: 15167 corp: 21/363b lim: 35 exec/s: 57 rss: 74Mb L: 11/35 MS: 1 CrossOver- 00:07:43.515 [2024-10-29 22:15:03.023682] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:43.515 [2024-10-29 22:15:03.023707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.773 #58 NEW cov: 12446 ft: 15197 corp: 22/373b lim: 35 exec/s: 58 rss: 74Mb L: 10/35 MS: 1 ChangeByte- 00:07:43.773 [2024-10-29 22:15:03.074271] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.773 [2024-10-29 22:15:03.074301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.773 #59 NEW cov: 12446 ft: 15247 corp: 23/387b lim: 35 exec/s: 59 rss: 74Mb L: 14/35 MS: 1 ChangeBinInt- 00:07:43.773 [2024-10-29 22:15:03.144843] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.773 [2024-10-29 22:15:03.144869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.773 [2024-10-29 22:15:03.144977] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.773 [2024-10-29 22:15:03.144993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.773 [2024-10-29 22:15:03.145086] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.773 [2024-10-29 22:15:03.145101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.773 #60 NEW cov: 12446 ft: 15263 corp: 24/414b lim: 35 exec/s: 60 rss: 74Mb L: 27/35 MS: 1 ChangeBit- 00:07:43.773 [2024-10-29 22:15:03.195717] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.773 [2024-10-29 22:15:03.195744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.773 [2024-10-29 22:15:03.195841] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:5 cdw10:00000610 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.773 [2024-10-29 22:15:03.195858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.773 [2024-10-29 22:15:03.195970] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.773 [2024-10-29 22:15:03.195986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.773 [2024-10-29 22:15:03.196091] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.773 [2024-10-29 22:15:03.196107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.773 [2024-10-29 22:15:03.196207] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:8 cdw10:00000610 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.773 
[2024-10-29 22:15:03.196222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:43.773 #61 NEW cov: 12446 ft: 15279 corp: 25/449b lim: 35 exec/s: 61 rss: 74Mb L: 35/35 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\020"- 00:07:43.773 [2024-10-29 22:15:03.275647] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.773 [2024-10-29 22:15:03.275674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.774 [2024-10-29 22:15:03.275781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.774 [2024-10-29 22:15:03.275797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.774 [2024-10-29 22:15:03.275892] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.774 [2024-10-29 22:15:03.275909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.033 #62 NEW cov: 12446 ft: 15285 corp: 26/476b lim: 35 exec/s: 62 rss: 74Mb L: 27/35 MS: 1 ChangeByte- 00:07:44.033 [2024-10-29 22:15:03.325266] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.033 [2024-10-29 22:15:03.325295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.033 #63 NEW cov: 12446 ft: 15300 corp: 27/488b lim: 35 exec/s: 63 rss: 75Mb L: 12/35 MS: 1 InsertByte- 00:07:44.033 [2024-10-29 22:15:03.395793] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.033 [2024-10-29 22:15:03.395822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.033 #64 NEW cov: 12446 ft: 15313 corp: 28/497b lim: 35 exec/s: 64 rss: 75Mb L: 9/35 MS: 1 EraseBytes- 00:07:44.033 [2024-10-29 22:15:03.466030] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.033 [2024-10-29 22:15:03.466058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.033 #65 NEW cov: 12446 ft: 15318 corp: 29/507b lim: 35 exec/s: 32 rss: 75Mb L: 10/35 MS: 1 ChangeBinInt- 00:07:44.033 #65 DONE cov: 12446 ft: 15318 corp: 29/507b lim: 35 exec/s: 32 rss: 75Mb 00:07:44.033 ###### Recommended dictionary. ###### 00:07:44.033 "\000\000\000\000\000\000\000\020" # Uses: 3 00:07:44.033 ###### End of recommended dictionary. 
###### 00:07:44.033 Done 65 runs in 2 second(s) 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:44.293 22:15:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:44.293 [2024-10-29 22:15:03.639861] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:07:44.293 [2024-10-29 22:15:03.639928] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3105717 ] 00:07:44.552 [2024-10-29 22:15:03.836584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.552 [2024-10-29 22:15:03.875713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.552 [2024-10-29 22:15:03.934888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.552 [2024-10-29 22:15:03.951075] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:44.552 INFO: Running with entropic power schedule (0xFF, 100). 00:07:44.552 INFO: Seed: 3832957548 00:07:44.552 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:44.552 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:44.552 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:44.552 INFO: A corpus is not provided, starting from an empty corpus 00:07:44.552 #2 INITED exec/s: 0 rss: 66Mb 00:07:44.552 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:44.552 This may also happen if the target rejected all inputs we tried so far 00:07:44.552 [2024-10-29 22:15:04.018144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17651002168549569780 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.552 [2024-10-29 22:15:04.018183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.552 [2024-10-29 22:15:04.018288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17651002172490708212 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.552 [2024-10-29 22:15:04.018313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.119 NEW_FUNC[1/716]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:45.119 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:45.119 #5 NEW cov: 12302 ft: 12307 corp: 2/43b lim: 105 exec/s: 0 rss: 74Mb L: 42/42 MS: 3 CopyPart-ChangeBit-InsertRepeatedBytes- 00:07:45.119 [2024-10-29 22:15:04.378994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17651002168549569780 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.119 [2024-10-29 22:15:04.379047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.119 #6 NEW cov: 12423 ft: 13389 corp: 3/84b lim: 105 exec/s: 0 rss: 74Mb L: 41/42 MS: 1 EraseBytes- 00:07:45.119 [2024-10-29 22:15:04.459512] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17627076795529164020 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.119 [2024-10-29 22:15:04.459543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.119 
[2024-10-29 22:15:04.459598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17651002172490708212 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.119 [2024-10-29 22:15:04.459617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.119 #7 NEW cov: 12429 ft: 13549 corp: 4/127b lim: 105 exec/s: 0 rss: 74Mb L: 43/43 MS: 1 InsertByte- 00:07:45.119 [2024-10-29 22:15:04.509644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17651002168549569780 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.119 [2024-10-29 22:15:04.509672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.119 [2024-10-29 22:15:04.509740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17592736852311602420 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.119 [2024-10-29 22:15:04.509757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.119 #8 NEW cov: 12514 ft: 13767 corp: 5/169b lim: 105 exec/s: 0 rss: 74Mb L: 42/43 MS: 1 InsertByte- 00:07:45.119 [2024-10-29 22:15:04.580233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:587857920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.119 [2024-10-29 22:15:04.580267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.119 [2024-10-29 22:15:04.580329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.119 [2024-10-29 22:15:04.580346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.119 [2024-10-29 22:15:04.580402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.119 [2024-10-29 22:15:04.580422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.119 #10 NEW cov: 12514 ft: 14221 corp: 6/246b lim: 105 exec/s: 0 rss: 74Mb L: 77/77 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:45.119 [2024-10-29 22:15:04.630871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:587857920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.119 [2024-10-29 22:15:04.630902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.119 [2024-10-29 22:15:04.630993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.119 [2024-10-29 22:15:04.631015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.119 [2024-10-29 22:15:04.631095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.119 [2024-10-29 22:15:04.631113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.119 [2024-10-29 22:15:04.631190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.119 [2024-10-29 22:15:04.631207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.378 #11 NEW cov: 12514 ft: 14835 corp: 7/344b lim: 105 exec/s: 0 rss: 74Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:07:45.378 [2024-10-29 22:15:04.701056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:587857920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.378 [2024-10-29 22:15:04.701087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.378 [2024-10-29 22:15:04.701168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:55563 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.378 [2024-10-29 22:15:04.701189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.378 [2024-10-29 22:15:04.701264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.378 [2024-10-29 22:15:04.701283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.378 [2024-10-29 22:15:04.701373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.378 [2024-10-29 22:15:04.701393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.378 #12 NEW cov: 12514 ft: 14927 corp: 8/429b lim: 105 exec/s: 0 rss: 74Mb L: 85/98 MS: 1 CMP- DE: "\217\331\012\\G\3455\000"- 00:07:45.378 [2024-10-29 22:15:04.751313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:587857920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.378 [2024-10-29 22:15:04.751342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.378 [2024-10-29 22:15:04.751440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:55563 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.378 [2024-10-29 22:15:04.751459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.378 [2024-10-29 22:15:04.751541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.378 [2024-10-29 22:15:04.751559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.378 [2024-10-29 22:15:04.751649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.378 [2024-10-29 22:15:04.751669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.378 #13 NEW cov: 12514 ft: 14978 
corp: 9/514b lim: 105 exec/s: 0 rss: 74Mb L: 85/98 MS: 1 ChangeBinInt- 00:07:45.378 [2024-10-29 22:15:04.821008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:587857920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.378 [2024-10-29 22:15:04.821037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.378 [2024-10-29 22:15:04.821104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:55563 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.379 [2024-10-29 22:15:04.821121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.379 #14 NEW cov: 12514 ft: 14987 corp: 10/569b lim: 105 exec/s: 0 rss: 74Mb L: 55/98 MS: 1 CrossOver- 00:07:45.379 [2024-10-29 22:15:04.871781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:587857920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.379 [2024-10-29 22:15:04.871812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.379 [2024-10-29 22:15:04.871882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:55563 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.379 [2024-10-29 22:15:04.871903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.379 [2024-10-29 22:15:04.871970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.379 [2024-10-29 22:15:04.871989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.379 [2024-10-29 22:15:04.872086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.379 [2024-10-29 22:15:04.872104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.639 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:45.639 #15 NEW cov: 12537 ft: 15043 corp: 11/655b lim: 105 exec/s: 0 rss: 74Mb L: 86/98 MS: 1 InsertByte- 00:07:45.639 [2024-10-29 22:15:04.951503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:9868782012453352692 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.639 [2024-10-29 22:15:04.951535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.639 [2024-10-29 22:15:04.951592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17650774573583758580 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.639 [2024-10-29 22:15:04.951611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.639 #16 NEW cov: 12537 ft: 15094 corp: 12/698b lim: 105 exec/s: 16 rss: 74Mb L: 43/98 MS: 1 InsertByte- 00:07:45.639 [2024-10-29 22:15:05.022411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 
lba:587857920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.639 [2024-10-29 22:15:05.022443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.639 [2024-10-29 22:15:05.022511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.639 [2024-10-29 22:15:05.022530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.639 [2024-10-29 22:15:05.022598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.639 [2024-10-29 22:15:05.022616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.639 [2024-10-29 22:15:05.022708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.639 [2024-10-29 22:15:05.022726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.639 #17 NEW cov: 12537 ft: 15115 corp: 13/796b lim: 105 exec/s: 17 rss: 74Mb L: 98/98 MS: 1 ShuffleBytes- 00:07:45.639 [2024-10-29 22:15:05.091762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17651002168549569780 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.639 [2024-10-29 22:15:05.091791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.639 #18 NEW cov: 12537 ft: 15170 corp: 14/837b lim: 105 exec/s: 18 rss: 74Mb L: 41/98 MS: 1 ChangeBinInt- 00:07:45.639 [2024-10-29 22:15:05.141957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:168493056 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.639 [2024-10-29 22:15:05.141987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.899 #19 NEW cov: 12537 ft: 15188 corp: 15/878b lim: 105 exec/s: 19 rss: 75Mb L: 41/98 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\004"- 00:07:45.899 [2024-10-29 22:15:05.212160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17627076795529164020 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.899 [2024-10-29 22:15:05.212193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.899 #20 NEW cov: 12537 ft: 15254 corp: 16/917b lim: 105 exec/s: 20 rss: 75Mb L: 39/98 MS: 1 EraseBytes- 00:07:45.899 [2024-10-29 22:15:05.283662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:587857920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.899 [2024-10-29 22:15:05.283691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.899 [2024-10-29 22:15:05.283792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.899 [2024-10-29 22:15:05.283809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.899 [2024-10-29 22:15:05.283892] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.899 [2024-10-29 22:15:05.283911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.899 [2024-10-29 22:15:05.283998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:17651002168381014016 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.899 [2024-10-29 22:15:05.284019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.899 [2024-10-29 22:15:05.284118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:17651002172490708212 len:62465 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.899 [2024-10-29 22:15:05.284134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:45.899 #21 NEW cov: 12537 ft: 15299 corp: 17/1022b lim: 105 exec/s: 21 rss: 75Mb L: 105/105 MS: 1 CrossOver- 00:07:45.899 [2024-10-29 22:15:05.332717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17627076795529164020 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.899 [2024-10-29 22:15:05.332745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.899 #22 NEW cov: 12537 ft: 15355 corp: 18/1061b lim: 105 exec/s: 22 rss: 75Mb L: 39/105 MS: 1 ChangeBinInt- 00:07:45.899 [2024-10-29 22:15:05.403249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:587857920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.899 [2024-10-29 22:15:05.403277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.899 [2024-10-29 22:15:05.403363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:131072 len:55563 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.899 [2024-10-29 22:15:05.403380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.158 #23 NEW cov: 12537 ft: 15377 corp: 19/1116b lim: 105 exec/s: 23 rss: 75Mb L: 55/105 MS: 1 ChangeBinInt- 00:07:46.158 [2024-10-29 22:15:05.474157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:587857920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.158 [2024-10-29 22:15:05.474188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.158 [2024-10-29 22:15:05.474253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:55563 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.158 [2024-10-29 22:15:05.474272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.158 [2024-10-29 22:15:05.474360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.158 [2024-10-29 
22:15:05.474379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.158 [2024-10-29 22:15:05.474473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.158 [2024-10-29 22:15:05.474492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:46.158 #24 NEW cov: 12537 ft: 15384 corp: 20/1218b lim: 105 exec/s: 24 rss: 75Mb L: 102/105 MS: 1 CrossOver- 00:07:46.158 [2024-10-29 22:15:05.543805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:861582757712360692 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.158 [2024-10-29 22:15:05.543835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.158 [2024-10-29 22:15:05.543890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17651002172490708212 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.158 [2024-10-29 22:15:05.543906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.158 #25 NEW cov: 12537 ft: 15421 corp: 21/1261b lim: 105 exec/s: 25 rss: 75Mb L: 43/105 MS: 1 CopyPart- 00:07:46.158 [2024-10-29 22:15:05.594289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:587857920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.158 [2024-10-29 22:15:05.594321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.158 [2024-10-29 22:15:05.594407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:55563 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.158 [2024-10-29 22:15:05.594425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.158 [2024-10-29 22:15:05.594503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:2561 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.158 [2024-10-29 22:15:05.594521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.158 #26 NEW cov: 12537 ft: 15461 corp: 22/1328b lim: 105 exec/s: 26 rss: 75Mb L: 67/105 MS: 1 CrossOver- 00:07:46.158 [2024-10-29 22:15:05.643872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17627075970895443188 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.158 [2024-10-29 22:15:05.643900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.158 #27 NEW cov: 12537 ft: 15474 corp: 23/1368b lim: 105 exec/s: 27 rss: 75Mb L: 40/105 MS: 1 InsertByte- 00:07:46.416 [2024-10-29 22:15:05.694395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:861582757712360692 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.417 [2024-10-29 22:15:05.694424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.417 [2024-10-29 
22:15:05.694490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17651002172490708208 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.417 [2024-10-29 22:15:05.694509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.417 #28 NEW cov: 12537 ft: 15489 corp: 24/1411b lim: 105 exec/s: 28 rss: 75Mb L: 43/105 MS: 1 ChangeBit- 00:07:46.417 [2024-10-29 22:15:05.765249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17627076795529164020 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.417 [2024-10-29 22:15:05.765275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.417 [2024-10-29 22:15:05.765396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744070085672959 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.417 [2024-10-29 22:15:05.765416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.417 [2024-10-29 22:15:05.765514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.417 [2024-10-29 22:15:05.765534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.417 [2024-10-29 22:15:05.765621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18374686483966590975 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.417 [2024-10-29 22:15:05.765640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:46.417 #29 NEW cov: 12537 ft: 15500 corp: 25/1496b lim: 105 exec/s: 29 rss: 75Mb L: 85/105 MS: 1 InsertRepeatedBytes- 00:07:46.417 [2024-10-29 22:15:05.835449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:587857920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.417 [2024-10-29 22:15:05.835476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.417 [2024-10-29 22:15:05.835553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.417 [2024-10-29 22:15:05.835570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.417 [2024-10-29 22:15:05.835651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.417 [2024-10-29 22:15:05.835669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.417 [2024-10-29 22:15:05.835761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.417 [2024-10-29 22:15:05.835781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:46.417 #30 NEW 
cov: 12537 ft: 15524 corp: 26/1595b lim: 105 exec/s: 30 rss: 75Mb L: 99/105 MS: 1 CopyPart- 00:07:46.417 [2024-10-29 22:15:05.884863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17627075970895443188 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.417 [2024-10-29 22:15:05.884891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.417 #31 NEW cov: 12537 ft: 15529 corp: 27/1635b lim: 105 exec/s: 31 rss: 75Mb L: 40/105 MS: 1 ChangeByte- 00:07:46.678 [2024-10-29 22:15:05.955265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17626851670523376884 len:62709 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.678 [2024-10-29 22:15:05.955293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.678 #32 pulse cov: 12537 ft: 15543 corp: 27/1635b lim: 105 exec/s: 16 rss: 75Mb 00:07:46.678 #32 NEW cov: 12537 ft: 15543 corp: 28/1675b lim: 105 exec/s: 16 rss: 75Mb L: 40/105 MS: 1 ChangeBinInt- 00:07:46.678 #32 DONE cov: 12537 ft: 15543 corp: 28/1675b lim: 105 exec/s: 16 rss: 75Mb 00:07:46.678 ###### Recommended dictionary. ###### 00:07:46.678 "\217\331\012\\G\3455\000" # Uses: 0 00:07:46.678 "\000\000\000\000\000\000\000\004" # Uses: 0 00:07:46.678 ###### End of recommended dictionary. ###### 00:07:46.678 Done 32 runs in 2 second(s) 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:46.678 22:15:06 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:46.678 22:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:46.678 [2024-10-29 22:15:06.153014] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:46.678 [2024-10-29 22:15:06.153082] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3106374 ] 00:07:46.937 [2024-10-29 22:15:06.348453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.937 [2024-10-29 22:15:06.387463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.937 [2024-10-29 22:15:06.446678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.196 [2024-10-29 22:15:06.462859] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:47.196 INFO: Running with entropic power schedule (0xFF, 100). 00:07:47.196 INFO: Seed: 2049990123 00:07:47.196 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:47.196 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:47.196 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:47.196 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.196 #2 INITED exec/s: 0 rss: 66Mb 00:07:47.196 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:47.196 This may also happen if the target rejected all inputs we tried so far 00:07:47.196 [2024-10-29 22:15:06.530554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.196 [2024-10-29 22:15:06.530590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.196 [2024-10-29 22:15:06.530654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.196 [2024-10-29 22:15:06.530674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.196 [2024-10-29 22:15:06.530714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.196 [2024-10-29 22:15:06.530733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.196 [2024-10-29 22:15:06.530826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.196 [2024-10-29 22:15:06.530841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:47.455 NEW_FUNC[1/717]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:47.455 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:47.455 #4 NEW cov: 12331 ft: 12331 corp: 2/105b lim: 120 exec/s: 0 rss: 74Mb L: 104/104 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:47.455 [2024-10-29 22:15:06.871573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.455 [2024-10-29 22:15:06.871620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.455 [2024-10-29 22:15:06.871696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.455 [2024-10-29 22:15:06.871717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.455 [2024-10-29 22:15:06.871817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.455 [2024-10-29 22:15:06.871835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.455 [2024-10-29 22:15:06.871928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744070219890687 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.455 [2024-10-29 22:15:06.871945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:47.455 #5 NEW cov: 
12444 ft: 13013 corp: 3/209b lim: 120 exec/s: 0 rss: 74Mb L: 104/104 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:07:47.455 [2024-10-29 22:15:06.941444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.455 [2024-10-29 22:15:06.941475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.455 [2024-10-29 22:15:06.941542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.455 [2024-10-29 22:15:06.941559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.455 [2024-10-29 22:15:06.941627] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.455 [2024-10-29 22:15:06.941646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.714 #11 NEW cov: 12450 ft: 13469 corp: 4/290b lim: 120 exec/s: 0 rss: 74Mb L: 81/104 MS: 1 EraseBytes- 00:07:47.714 [2024-10-29 22:15:07.011838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.714 [2024-10-29 22:15:07.011864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.714 [2024-10-29 22:15:07.011943] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.714 [2024-10-29 22:15:07.011962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.714 [2024-10-29 22:15:07.012052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.714 [2024-10-29 22:15:07.012069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.715 [2024-10-29 22:15:07.012160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744070219890687 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.715 [2024-10-29 22:15:07.012178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:47.715 #12 NEW cov: 12535 ft: 13790 corp: 5/394b lim: 120 exec/s: 0 rss: 74Mb L: 104/104 MS: 1 ChangeByte- 00:07:47.715 [2024-10-29 22:15:07.061830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:53713 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.715 [2024-10-29 22:15:07.061861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.715 [2024-10-29 22:15:07.061954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.715 [2024-10-29 
22:15:07.061968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.715 [2024-10-29 22:15:07.062057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.715 [2024-10-29 22:15:07.062075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.715 #13 NEW cov: 12535 ft: 13859 corp: 6/475b lim: 120 exec/s: 0 rss: 74Mb L: 81/104 MS: 1 ChangeBinInt- 00:07:47.715 [2024-10-29 22:15:07.132422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.715 [2024-10-29 22:15:07.132452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.715 [2024-10-29 22:15:07.132533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.715 [2024-10-29 22:15:07.132550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.715 [2024-10-29 22:15:07.132631] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.715 [2024-10-29 22:15:07.132650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.715 [2024-10-29 22:15:07.132745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744070219890687 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.715 [2024-10-29 22:15:07.132765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:47.715 #14 NEW cov: 12535 ft: 13975 corp: 7/579b lim: 120 exec/s: 0 rss: 74Mb L: 104/104 MS: 1 ShuffleBytes- 00:07:47.715 [2024-10-29 22:15:07.202692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.715 [2024-10-29 22:15:07.202720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.715 [2024-10-29 22:15:07.202792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.715 [2024-10-29 22:15:07.202813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.715 [2024-10-29 22:15:07.202893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3400171741831442223 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.715 [2024-10-29 22:15:07.202912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.715 [2024-10-29 22:15:07.203004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744070219890687 len:12080 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:47.715 [2024-10-29 22:15:07.203021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:47.974 #15 NEW cov: 12535 ft: 14028 corp: 8/683b lim: 120 exec/s: 0 rss: 75Mb L: 104/104 MS: 1 ChangeBinInt- 00:07:47.974 [2024-10-29 22:15:07.272014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988122768846639 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.974 [2024-10-29 22:15:07.272045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.974 #17 NEW cov: 12535 ft: 14925 corp: 9/717b lim: 120 exec/s: 0 rss: 75Mb L: 34/104 MS: 2 CopyPart-CrossOver- 00:07:47.974 [2024-10-29 22:15:07.332376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.974 [2024-10-29 22:15:07.332406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.974 #22 NEW cov: 12535 ft: 14983 corp: 10/758b lim: 120 exec/s: 0 rss: 75Mb L: 41/104 MS: 5 InsertByte-InsertByte-ChangeBit-ChangeBit-CrossOver- 00:07:47.974 [2024-10-29 22:15:07.383239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.974 [2024-10-29 22:15:07.383267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.974 [2024-10-29 22:15:07.383352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.974 [2024-10-29 22:15:07.383369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.974 [2024-10-29 22:15:07.383453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.974 [2024-10-29 22:15:07.383472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.974 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:47.974 #23 NEW cov: 12558 ft: 15041 corp: 11/839b lim: 120 exec/s: 0 rss: 75Mb L: 81/104 MS: 1 ChangeByte- 00:07:47.974 [2024-10-29 22:15:07.433082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.974 [2024-10-29 22:15:07.433111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.974 #24 NEW cov: 12558 ft: 15068 corp: 12/879b lim: 120 exec/s: 0 rss: 75Mb L: 40/104 MS: 1 CrossOver- 00:07:48.233 [2024-10-29 22:15:07.503794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.233 [2024-10-29 22:15:07.503827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.233 
[2024-10-29 22:15:07.503932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.233 [2024-10-29 22:15:07.503951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.233 #30 NEW cov: 12558 ft: 15399 corp: 13/930b lim: 120 exec/s: 30 rss: 75Mb L: 51/104 MS: 1 EraseBytes- 00:07:48.233 [2024-10-29 22:15:07.583911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3458764510317185583 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.233 [2024-10-29 22:15:07.583944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.233 #35 NEW cov: 12558 ft: 15457 corp: 14/962b lim: 120 exec/s: 35 rss: 75Mb L: 32/104 MS: 5 EraseBytes-ChangeBit-CrossOver-PersAutoDict-CrossOver- DE: "\377\377\377\377\377\377\377\377"- 00:07:48.233 [2024-10-29 22:15:07.664490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.233 [2024-10-29 22:15:07.664526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.233 [2024-10-29 22:15:07.664635] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.233 [2024-10-29 22:15:07.664650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.233 #36 NEW cov: 12558 ft: 15502 corp: 15/1012b lim: 120 exec/s: 36 rss: 75Mb L: 50/104 MS: 1 EraseBytes- 00:07:48.233 [2024-10-29 22:15:07.715462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.233 [2024-10-29 22:15:07.715491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.233 [2024-10-29 22:15:07.715572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.233 [2024-10-29 22:15:07.715591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.233 [2024-10-29 22:15:07.715675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3400171741831442223 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.233 [2024-10-29 22:15:07.715693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.233 [2024-10-29 22:15:07.715780] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744070219890687 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.234 [2024-10-29 22:15:07.715798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:48.234 #37 NEW cov: 12558 ft: 15518 corp: 16/1116b lim: 120 exec/s: 37 rss: 75Mb L: 104/104 MS: 1 ChangeBit- 00:07:48.493 [2024-10-29 22:15:07.764644] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.493 [2024-10-29 22:15:07.764672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.493 #38 NEW cov: 12558 ft: 15540 corp: 17/1156b lim: 120 exec/s: 38 rss: 75Mb L: 40/104 MS: 1 CMP- DE: "\001\000\177\302\004\022\346\253"- 00:07:48.493 [2024-10-29 22:15:07.815495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:53713 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.493 [2024-10-29 22:15:07.815521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.493 [2024-10-29 22:15:07.815593] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.493 [2024-10-29 22:15:07.815610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.493 [2024-10-29 22:15:07.815696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.493 [2024-10-29 22:15:07.815715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.493 #39 NEW cov: 12558 ft: 15547 corp: 18/1237b lim: 120 exec/s: 39 rss: 75Mb L: 81/104 MS: 1 CopyPart- 00:07:48.493 [2024-10-29 22:15:07.886147] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399987023877975855 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.493 [2024-10-29 22:15:07.886175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.493 [2024-10-29 22:15:07.886243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.493 [2024-10-29 22:15:07.886261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.494 [2024-10-29 22:15:07.886349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3400171741831442223 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.494 [2024-10-29 22:15:07.886367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.494 [2024-10-29 22:15:07.886461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744070219890687 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.494 [2024-10-29 22:15:07.886478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:48.494 #40 NEW cov: 12558 ft: 15573 corp: 19/1341b lim: 120 exec/s: 40 rss: 75Mb L: 104/104 MS: 1 ChangeByte- 00:07:48.494 [2024-10-29 22:15:07.956065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.494 [2024-10-29 
22:15:07.956091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.494 [2024-10-29 22:15:07.956167] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12072 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.494 [2024-10-29 22:15:07.956185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.494 [2024-10-29 22:15:07.956274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.494 [2024-10-29 22:15:07.956290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.494 #41 NEW cov: 12558 ft: 15588 corp: 20/1422b lim: 120 exec/s: 41 rss: 75Mb L: 81/104 MS: 1 ChangeBit- 00:07:48.494 [2024-10-29 22:15:08.005865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.494 [2024-10-29 22:15:08.005895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.494 [2024-10-29 22:15:08.005954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.494 [2024-10-29 22:15:08.005972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.753 #42 NEW cov: 12558 ft: 15604 corp: 21/1473b lim: 120 exec/s: 42 rss: 75Mb L: 51/104 MS: 1 ShuffleBytes- 00:07:48.753 [2024-10-29 22:15:08.076633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.753 [2024-10-29 22:15:08.076659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.753 [2024-10-29 22:15:08.076739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.753 [2024-10-29 22:15:08.076758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.753 [2024-10-29 22:15:08.076836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.753 [2024-10-29 22:15:08.076854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.753 #43 NEW cov: 12558 ft: 15616 corp: 22/1552b lim: 120 exec/s: 43 rss: 75Mb L: 79/104 MS: 1 EraseBytes- 00:07:48.753 [2024-10-29 22:15:08.127326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.753 [2024-10-29 22:15:08.127352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.753 [2024-10-29 22:15:08.127436] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.753 [2024-10-29 22:15:08.127453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.753 [2024-10-29 22:15:08.127534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.753 [2024-10-29 22:15:08.127552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.753 [2024-10-29 22:15:08.127637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744070219890687 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.753 [2024-10-29 22:15:08.127654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:48.753 #44 NEW cov: 12558 ft: 15626 corp: 23/1656b lim: 120 exec/s: 44 rss: 75Mb L: 104/104 MS: 1 ChangeByte- 00:07:48.753 [2024-10-29 22:15:08.176822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.753 [2024-10-29 22:15:08.176852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.753 [2024-10-29 22:15:08.176923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.753 [2024-10-29 22:15:08.176941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.753 #45 NEW cov: 12558 ft: 15657 corp: 24/1706b lim: 120 exec/s: 45 rss: 75Mb L: 50/104 MS: 1 ShuffleBytes- 00:07:48.753 [2024-10-29 22:15:08.247168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.753 [2024-10-29 22:15:08.247199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.753 [2024-10-29 22:15:08.247254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13672292666396491197 len:48432 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.753 [2024-10-29 22:15:08.247273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.011 #46 NEW cov: 12558 ft: 15669 corp: 25/1769b lim: 120 exec/s: 46 rss: 75Mb L: 63/104 MS: 1 InsertRepeatedBytes- 00:07:49.011 [2024-10-29 22:15:08.318116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.011 [2024-10-29 22:15:08.318145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.011 [2024-10-29 22:15:08.318221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:293550547863109570 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.011 [2024-10-29 22:15:08.318241] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.011 [2024-10-29 22:15:08.318327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.011 [2024-10-29 22:15:08.318348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.011 [2024-10-29 22:15:08.318429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3399988123389603631 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.011 [2024-10-29 22:15:08.318444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:49.011 #47 NEW cov: 12558 ft: 15691 corp: 26/1881b lim: 120 exec/s: 47 rss: 75Mb L: 112/112 MS: 1 PersAutoDict- DE: "\001\000\177\302\004\022\346\253"- 00:07:49.011 [2024-10-29 22:15:08.388299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.011 [2024-10-29 22:15:08.388328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.011 [2024-10-29 22:15:08.388405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.011 [2024-10-29 22:15:08.388422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.011 [2024-10-29 22:15:08.388510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3458764510317195055 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.011 [2024-10-29 22:15:08.388526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.011 [2024-10-29 22:15:08.388621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3399988123389603631 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.011 [2024-10-29 22:15:08.388637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:49.011 #48 NEW cov: 12558 ft: 15707 corp: 27/1992b lim: 120 exec/s: 48 rss: 76Mb L: 111/112 MS: 1 CopyPart- 00:07:49.011 [2024-10-29 22:15:08.437914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.011 [2024-10-29 22:15:08.437942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.011 [2024-10-29 22:15:08.438013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.011 [2024-10-29 22:15:08.438031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.011 #49 NEW cov: 12558 ft: 15713 corp: 28/2042b lim: 120 exec/s: 49 rss: 76Mb L: 50/112 MS: 1 ChangeByte- 00:07:49.011 [2024-10-29 22:15:08.508057] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3399988123389603631 len:12080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.011 [2024-10-29 22:15:08.508086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.011 [2024-10-29 22:15:08.508145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3399988123389603631 len:32560 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.011 [2024-10-29 22:15:08.508165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.270 #50 NEW cov: 12558 ft: 15732 corp: 29/2091b lim: 120 exec/s: 25 rss: 76Mb L: 49/112 MS: 1 CrossOver- 00:07:49.270 #50 DONE cov: 12558 ft: 15732 corp: 29/2091b lim: 120 exec/s: 25 rss: 76Mb 00:07:49.270 ###### Recommended dictionary. ###### 00:07:49.270 "\377\377\377\377\377\377\377\377" # Uses: 1 00:07:49.270 "\001\000\177\302\004\022\346\253" # Uses: 1 00:07:49.270 ###### End of recommended dictionary. ###### 00:07:49.270 Done 50 runs in 2 second(s) 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:49.270 22:15:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:49.270 [2024-10-29 22:15:08.712677] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:49.270 [2024-10-29 22:15:08.712755] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3106681 ] 00:07:49.528 [2024-10-29 22:15:08.919707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.528 [2024-10-29 22:15:08.959689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.528 [2024-10-29 22:15:09.019132] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.528 [2024-10-29 22:15:09.035305] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:49.528 INFO: Running with entropic power schedule (0xFF, 100). 00:07:49.528 INFO: Seed: 328041513 00:07:49.787 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:49.787 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:49.787 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:49.787 INFO: A corpus is not provided, starting from an empty corpus 00:07:49.787 #2 INITED exec/s: 0 rss: 66Mb 00:07:49.787 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:49.787 This may also happen if the target rejected all inputs we tried so far 00:07:49.787 [2024-10-29 22:15:09.100701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:49.787 [2024-10-29 22:15:09.100732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.787 [2024-10-29 22:15:09.100780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:49.787 [2024-10-29 22:15:09.100799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.045 NEW_FUNC[1/715]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:50.045 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:50.045 #11 NEW cov: 12274 ft: 12265 corp: 2/52b lim: 100 exec/s: 0 rss: 74Mb L: 51/51 MS: 4 InsertRepeatedBytes-ChangeBit-ChangeBinInt-InsertRepeatedBytes- 00:07:50.045 [2024-10-29 22:15:09.441790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.045 [2024-10-29 22:15:09.441850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.045 [2024-10-29 22:15:09.441930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.045 [2024-10-29 22:15:09.441959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.045 #12 NEW cov: 12387 ft: 12826 
corp: 3/103b lim: 100 exec/s: 0 rss: 74Mb L: 51/51 MS: 1 CrossOver- 00:07:50.045 [2024-10-29 22:15:09.501797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.045 [2024-10-29 22:15:09.501824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.045 [2024-10-29 22:15:09.501871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.045 [2024-10-29 22:15:09.501886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.045 [2024-10-29 22:15:09.501938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:50.045 [2024-10-29 22:15:09.501953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:50.045 #16 NEW cov: 12393 ft: 13522 corp: 4/165b lim: 100 exec/s: 0 rss: 74Mb L: 62/62 MS: 4 InsertRepeatedBytes-InsertByte-ChangeByte-CrossOver- 00:07:50.045 [2024-10-29 22:15:09.541876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.045 [2024-10-29 22:15:09.541904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.045 [2024-10-29 22:15:09.541939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.045 [2024-10-29 22:15:09.541953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.045 [2024-10-29 22:15:09.542002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:50.045 [2024-10-29 22:15:09.542017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:50.303 #17 NEW cov: 12478 ft: 13793 corp: 5/227b lim: 100 exec/s: 0 rss: 74Mb L: 62/62 MS: 1 ChangeByte- 00:07:50.303 [2024-10-29 22:15:09.602047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.303 [2024-10-29 22:15:09.602074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.303 [2024-10-29 22:15:09.602108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.303 [2024-10-29 22:15:09.602123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.303 [2024-10-29 22:15:09.602174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:50.303 [2024-10-29 22:15:09.602189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:50.303 #23 NEW cov: 12478 ft: 13868 corp: 6/289b lim: 100 exec/s: 0 rss: 74Mb L: 62/62 MS: 1 ShuffleBytes- 00:07:50.303 [2024-10-29 22:15:09.642038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.303 [2024-10-29 22:15:09.642065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.303 [2024-10-29 22:15:09.642111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.303 [2024-10-29 22:15:09.642125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.303 #24 NEW cov: 12478 ft: 13945 corp: 7/340b lim: 100 exec/s: 0 rss: 74Mb L: 51/62 MS: 1 ShuffleBytes- 00:07:50.303 [2024-10-29 22:15:09.702200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.303 [2024-10-29 22:15:09.702227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.303 [2024-10-29 22:15:09.702262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.303 [2024-10-29 22:15:09.702276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.303 #25 NEW cov: 12478 ft: 14051 corp: 8/391b lim: 100 exec/s: 0 rss: 74Mb L: 51/62 MS: 1 CopyPart- 00:07:50.303 [2024-10-29 22:15:09.762380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.303 [2024-10-29 22:15:09.762405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.303 [2024-10-29 22:15:09.762442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.303 [2024-10-29 22:15:09.762456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.303 #26 NEW cov: 12478 ft: 14131 corp: 9/442b lim: 100 exec/s: 0 rss: 74Mb L: 51/62 MS: 1 ShuffleBytes- 00:07:50.303 [2024-10-29 22:15:09.802483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.303 [2024-10-29 22:15:09.802509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.303 [2024-10-29 22:15:09.802544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.303 [2024-10-29 22:15:09.802558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.303 #27 NEW cov: 12478 ft: 14209 corp: 10/493b lim: 100 exec/s: 0 rss: 74Mb L: 51/62 MS: 1 ChangeBit- 00:07:50.561 [2024-10-29 22:15:09.842840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.561 [2024-10-29 22:15:09.842866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.561 [2024-10-29 22:15:09.842911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.561 [2024-10-29 22:15:09.842926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.561 [2024-10-29 22:15:09.842977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:50.561 [2024-10-29 22:15:09.842992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:50.561 [2024-10-29 22:15:09.843042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:50.561 [2024-10-29 22:15:09.843056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:50.561 #28 NEW cov: 12478 ft: 14558 corp: 11/581b lim: 100 exec/s: 0 rss: 74Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:07:50.561 [2024-10-29 22:15:09.902812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.561 [2024-10-29 22:15:09.902840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.561 [2024-10-29 22:15:09.902889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.561 [2024-10-29 22:15:09.902904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.561 #29 NEW cov: 12478 ft: 14641 corp: 12/632b lim: 100 exec/s: 0 rss: 74Mb L: 51/88 MS: 1 CopyPart- 00:07:50.561 [2024-10-29 22:15:09.942778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.561 [2024-10-29 22:15:09.942805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.561 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:50.561 #30 NEW cov: 12501 ft: 15011 corp: 13/666b lim: 100 exec/s: 0 rss: 75Mb L: 34/88 MS: 1 EraseBytes- 00:07:50.561 [2024-10-29 22:15:10.002990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.561 [2024-10-29 22:15:10.003017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.561 #31 NEW cov: 12501 ft: 15075 corp: 14/697b lim: 100 exec/s: 0 rss: 75Mb L: 31/88 MS: 1 EraseBytes- 00:07:50.561 [2024-10-29 22:15:10.043378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.561 [2024-10-29 22:15:10.043411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.561 [2024-10-29 22:15:10.043462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.561 [2024-10-29 22:15:10.043478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.561 [2024-10-29 22:15:10.043527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:50.561 [2024-10-29 22:15:10.043541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:50.819 #32 NEW cov: 12501 ft: 15086 corp: 15/769b lim: 100 exec/s: 32 rss: 75Mb L: 72/88 MS: 1 InsertRepeatedBytes- 00:07:50.819 [2024-10-29 22:15:10.103616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.819 [2024-10-29 22:15:10.103647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.819 [2024-10-29 22:15:10.103691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.819 [2024-10-29 22:15:10.103707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.819 [2024-10-29 22:15:10.103758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:50.819 [2024-10-29 22:15:10.103773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:50.819 [2024-10-29 22:15:10.103824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:50.819 [2024-10-29 22:15:10.103838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:50.819 #33 NEW cov: 12501 ft: 15178 corp: 16/865b lim: 100 exec/s: 33 rss: 75Mb L: 96/96 MS: 1 InsertRepeatedBytes- 00:07:50.819 [2024-10-29 22:15:10.163550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.819 [2024-10-29 22:15:10.163579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.820 [2024-10-29 22:15:10.163628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.820 [2024-10-29 22:15:10.163644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.820 #34 NEW cov: 12501 ft: 15214 corp: 17/917b lim: 100 exec/s: 34 rss: 75Mb L: 52/96 MS: 1 InsertByte- 00:07:50.820 [2024-10-29 22:15:10.203508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.820 [2024-10-29 22:15:10.203535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.820 #35 NEW cov: 12501 ft: 15254 corp: 18/955b lim: 100 exec/s: 35 rss: 75Mb L: 38/96 MS: 1 EraseBytes- 00:07:50.820 [2024-10-29 22:15:10.243771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.820 [2024-10-29 22:15:10.243797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.820 [2024-10-29 22:15:10.243833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.820 [2024-10-29 22:15:10.243848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.820 #36 NEW cov: 12501 ft: 15271 corp: 19/1006b lim: 100 exec/s: 36 rss: 75Mb L: 51/96 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:07:50.820 [2024-10-29 22:15:10.284131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:50.820 [2024-10-29 22:15:10.284158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.820 [2024-10-29 22:15:10.284204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:50.820 [2024-10-29 22:15:10.284219] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.820 [2024-10-29 22:15:10.284269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:50.820 [2024-10-29 22:15:10.284285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:50.820 [2024-10-29 22:15:10.284338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:50.820 [2024-10-29 22:15:10.284353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:50.820 #37 NEW cov: 12501 ft: 15282 corp: 20/1100b lim: 100 exec/s: 37 rss: 75Mb L: 94/96 MS: 1 InsertRepeatedBytes- 00:07:51.079 [2024-10-29 22:15:10.344165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.079 [2024-10-29 22:15:10.344191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.079 [2024-10-29 22:15:10.344237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:51.079 [2024-10-29 22:15:10.344252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.079 [2024-10-29 22:15:10.344308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:51.079 [2024-10-29 22:15:10.344324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.079 #38 NEW cov: 12501 ft: 15333 corp: 21/1163b lim: 100 exec/s: 38 rss: 75Mb L: 63/96 MS: 1 InsertByte- 00:07:51.079 [2024-10-29 22:15:10.384052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.079 [2024-10-29 22:15:10.384077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.079 #39 NEW cov: 12501 ft: 15338 corp: 22/1189b lim: 100 exec/s: 39 rss: 75Mb L: 26/96 MS: 1 EraseBytes- 00:07:51.079 [2024-10-29 22:15:10.444333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.079 [2024-10-29 22:15:10.444361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.079 [2024-10-29 22:15:10.444396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:51.079 [2024-10-29 22:15:10.444410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.079 #40 NEW cov: 12501 ft: 15355 corp: 23/1248b lim: 100 exec/s: 40 rss: 75Mb L: 59/96 MS: 1 CopyPart- 00:07:51.079 [2024-10-29 22:15:10.484650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.079 [2024-10-29 22:15:10.484677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.079 [2024-10-29 22:15:10.484722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 
nsid:0 00:07:51.079 [2024-10-29 22:15:10.484738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.079 [2024-10-29 22:15:10.484786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:51.079 [2024-10-29 22:15:10.484802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.079 [2024-10-29 22:15:10.484851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:51.079 [2024-10-29 22:15:10.484866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.079 #42 NEW cov: 12501 ft: 15381 corp: 24/1330b lim: 100 exec/s: 42 rss: 75Mb L: 82/96 MS: 2 EraseBytes-CrossOver- 00:07:51.079 [2024-10-29 22:15:10.544730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.079 [2024-10-29 22:15:10.544757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.079 [2024-10-29 22:15:10.544804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:51.079 [2024-10-29 22:15:10.544818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.079 [2024-10-29 22:15:10.544870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:51.079 [2024-10-29 22:15:10.544885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.079 #43 NEW cov: 12501 ft: 15405 corp: 25/1392b lim: 100 exec/s: 43 rss: 75Mb L: 62/96 MS: 1 ShuffleBytes- 00:07:51.358 [2024-10-29 22:15:10.604823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.358 [2024-10-29 22:15:10.604850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.358 [2024-10-29 22:15:10.604887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:51.358 [2024-10-29 22:15:10.604903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.358 #44 NEW cov: 12501 ft: 15419 corp: 26/1443b lim: 100 exec/s: 44 rss: 75Mb L: 51/96 MS: 1 ChangeBinInt- 00:07:51.358 [2024-10-29 22:15:10.645127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.358 [2024-10-29 22:15:10.645155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.358 [2024-10-29 22:15:10.645205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:51.358 [2024-10-29 22:15:10.645221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.358 [2024-10-29 22:15:10.645274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:51.358 [2024-10-29 22:15:10.645288] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.358 [2024-10-29 22:15:10.645343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:51.358 [2024-10-29 22:15:10.645357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.358 #45 NEW cov: 12501 ft: 15454 corp: 27/1539b lim: 100 exec/s: 45 rss: 75Mb L: 96/96 MS: 1 ChangeBinInt- 00:07:51.358 [2024-10-29 22:15:10.705062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.358 [2024-10-29 22:15:10.705087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.358 [2024-10-29 22:15:10.705123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:51.358 [2024-10-29 22:15:10.705137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.358 #46 NEW cov: 12501 ft: 15463 corp: 28/1590b lim: 100 exec/s: 46 rss: 75Mb L: 51/96 MS: 1 ShuffleBytes- 00:07:51.358 [2024-10-29 22:15:10.745162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.358 [2024-10-29 22:15:10.745188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.358 [2024-10-29 22:15:10.745223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:51.358 [2024-10-29 22:15:10.745237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.358 #47 NEW cov: 12501 ft: 15482 corp: 29/1641b lim: 100 exec/s: 47 rss: 75Mb L: 51/96 MS: 1 ShuffleBytes- 00:07:51.358 [2024-10-29 22:15:10.785392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.358 [2024-10-29 22:15:10.785418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.358 [2024-10-29 22:15:10.785464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:51.358 [2024-10-29 22:15:10.785480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.358 [2024-10-29 22:15:10.785533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:51.358 [2024-10-29 22:15:10.785547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.358 #48 NEW cov: 12501 ft: 15493 corp: 30/1714b lim: 100 exec/s: 48 rss: 75Mb L: 73/96 MS: 1 CrossOver- 00:07:51.358 [2024-10-29 22:15:10.825278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.358 [2024-10-29 22:15:10.825308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.358 #49 NEW cov: 12501 ft: 15535 corp: 31/1749b lim: 100 exec/s: 49 rss: 75Mb L: 35/96 MS: 1 CrossOver- 00:07:51.358 
[2024-10-29 22:15:10.865462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.358 [2024-10-29 22:15:10.865488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.667 #50 NEW cov: 12501 ft: 15563 corp: 32/1783b lim: 100 exec/s: 50 rss: 75Mb L: 34/96 MS: 1 ChangeByte- 00:07:51.667 [2024-10-29 22:15:10.905561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.667 [2024-10-29 22:15:10.905588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.667 #51 NEW cov: 12501 ft: 15608 corp: 33/1809b lim: 100 exec/s: 51 rss: 75Mb L: 26/96 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:07:51.667 [2024-10-29 22:15:10.965793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.667 [2024-10-29 22:15:10.965818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.667 [2024-10-29 22:15:10.965854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:51.667 [2024-10-29 22:15:10.965869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.667 #52 NEW cov: 12501 ft: 15615 corp: 34/1864b lim: 100 exec/s: 52 rss: 75Mb L: 55/96 MS: 1 CrossOver- 00:07:51.667 [2024-10-29 22:15:11.005803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.667 [2024-10-29 22:15:11.005829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.667 #53 NEW cov: 12501 ft: 15630 corp: 35/1890b lim: 100 exec/s: 53 rss: 75Mb L: 26/96 MS: 1 ChangeByte- 00:07:51.667 [2024-10-29 22:15:11.066077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.667 [2024-10-29 22:15:11.066102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.667 [2024-10-29 22:15:11.066139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:51.667 [2024-10-29 22:15:11.066153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.667 [2024-10-29 22:15:11.106195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:51.667 [2024-10-29 22:15:11.106220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.667 [2024-10-29 22:15:11.106260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:51.667 [2024-10-29 22:15:11.106275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.667 #55 NEW cov: 12501 ft: 15634 corp: 36/1936b lim: 100 exec/s: 27 rss: 75Mb L: 46/96 MS: 2 ChangeBit-EraseBytes- 00:07:51.667 #55 DONE cov: 12501 ft: 15634 corp: 36/1936b lim: 100 exec/s: 27 rss: 75Mb 00:07:51.667 ###### Recommended 
dictionary. ###### 00:07:51.667 "\377\377\377\377\377\377\377\377" # Uses: 1 00:07:51.667 ###### End of recommended dictionary. ###### 00:07:51.667 Done 55 runs in 2 second(s) 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:51.974 22:15:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:51.974 [2024-10-29 22:15:11.279809] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:07:51.974 [2024-10-29 22:15:11.279876] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3107018 ] 00:07:51.974 [2024-10-29 22:15:11.485229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.272 [2024-10-29 22:15:11.525771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.272 [2024-10-29 22:15:11.585293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.272 [2024-10-29 22:15:11.601456] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:52.272 INFO: Running with entropic power schedule (0xFF, 100). 00:07:52.272 INFO: Seed: 2893039238 00:07:52.272 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:52.272 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:52.272 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:52.272 INFO: A corpus is not provided, starting from an empty corpus 00:07:52.272 #2 INITED exec/s: 0 rss: 66Mb 00:07:52.272 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:52.272 This may also happen if the target rejected all inputs we tried so far 00:07:52.272 [2024-10-29 22:15:11.672366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15697817505862638041 len:55770 00:07:52.272 [2024-10-29 22:15:11.672402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.531 NEW_FUNC[1/714]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:52.531 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:52.531 #8 NEW cov: 12226 ft: 12253 corp: 2/15b lim: 50 exec/s: 0 rss: 74Mb L: 14/14 MS: 1 InsertRepeatedBytes- 00:07:52.531 [2024-10-29 22:15:12.013388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15697817502255536601 len:55770 00:07:52.531 [2024-10-29 22:15:12.013429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.531 NEW_FUNC[1/1]: 0x483768 in malloc_completion_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/bdev/malloc/bdev_malloc.c:865 00:07:52.531 #10 NEW cov: 12365 ft: 12777 corp: 3/30b lim: 50 exec/s: 0 rss: 74Mb L: 15/15 MS: 2 ChangeBit-CrossOver- 00:07:52.790 [2024-10-29 22:15:12.063494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15697817502255490854 len:55770 00:07:52.790 [2024-10-29 22:15:12.063524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.790 #11 NEW cov: 12371 ft: 13010 corp: 4/45b lim: 50 exec/s: 0 rss: 74Mb L: 15/15 MS: 1 ChangeBinInt- 00:07:52.790 [2024-10-29 22:15:12.133949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15697817502255490854 len:55770 00:07:52.790 [2024-10-29 
22:15:12.133976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.790 [2024-10-29 22:15:12.134072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:61319595977342976 len:55563 00:07:52.790 [2024-10-29 22:15:12.134089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.790 #12 NEW cov: 12456 ft: 13552 corp: 5/65b lim: 50 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:52.790 [2024-10-29 22:15:12.204229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15697817502255490854 len:55770 00:07:52.790 [2024-10-29 22:15:12.204256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.790 [2024-10-29 22:15:12.204321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:61601070954053632 len:55563 00:07:52.790 [2024-10-29 22:15:12.204348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.790 #13 NEW cov: 12456 ft: 13712 corp: 6/85b lim: 50 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:07:52.790 [2024-10-29 22:15:12.274487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15697817505862638041 len:55563 00:07:52.790 [2024-10-29 22:15:12.274518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.047 #19 NEW cov: 12456 ft: 13739 corp: 7/95b lim: 50 exec/s: 0 rss: 74Mb L: 10/20 MS: 1 EraseBytes- 00:07:53.047 [2024-10-29 22:15:12.345267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:07:53.047 [2024-10-29 22:15:12.345295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.047 [2024-10-29 22:15:12.345369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:53.047 [2024-10-29 22:15:12.345386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.047 [2024-10-29 22:15:12.345469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:53.047 [2024-10-29 22:15:12.345488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.047 #20 NEW cov: 12456 ft: 14027 corp: 8/127b lim: 50 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:53.047 [2024-10-29 22:15:12.395147] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15697817502255490854 len:53722 00:07:53.047 [2024-10-29 22:15:12.395176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.047 #21 NEW cov: 12456 ft: 14068 corp: 9/142b lim: 50 exec/s: 0 rss: 74Mb L: 15/32 MS: 1 ChangeBinInt- 00:07:53.047 [2024-10-29 
22:15:12.445854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:07:53.047 [2024-10-29 22:15:12.445882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.047 [2024-10-29 22:15:12.445958] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744043644780543 len:65536 00:07:53.047 [2024-10-29 22:15:12.445975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.047 [2024-10-29 22:15:12.446050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:53.047 [2024-10-29 22:15:12.446066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.047 #22 NEW cov: 12456 ft: 14130 corp: 10/174b lim: 50 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:53.047 [2024-10-29 22:15:12.516139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:07:53.047 [2024-10-29 22:15:12.516165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.047 [2024-10-29 22:15:12.516222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18376093824490405887 len:65536 00:07:53.047 [2024-10-29 22:15:12.516239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.047 [2024-10-29 22:15:12.516327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:53.047 [2024-10-29 22:15:12.516346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.047 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:53.048 #23 NEW cov: 12479 ft: 14204 corp: 11/206b lim: 50 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:53.306 [2024-10-29 22:15:12.585851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9378613763055781767 len:34696 00:07:53.306 [2024-10-29 22:15:12.585880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.306 #27 NEW cov: 12479 ft: 14264 corp: 12/216b lim: 50 exec/s: 0 rss: 74Mb L: 10/32 MS: 4 InsertRepeatedBytes-ChangeBit-ChangeByte-InsertByte- 00:07:53.306 [2024-10-29 22:15:12.636592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7595718147065866601 len:26986 00:07:53.306 [2024-10-29 22:15:12.636622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.306 [2024-10-29 22:15:12.636685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:7595718147998050665 len:26986 00:07:53.306 [2024-10-29 22:15:12.636704] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.306 [2024-10-29 22:15:12.636788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:7595718629034387817 len:2610 00:07:53.306 [2024-10-29 22:15:12.636808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.306 #32 NEW cov: 12479 ft: 14298 corp: 13/247b lim: 50 exec/s: 32 rss: 74Mb L: 31/32 MS: 5 CrossOver-InsertByte-CopyPart-CrossOver-InsertRepeatedBytes- 00:07:53.306 [2024-10-29 22:15:12.686508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:07:53.306 [2024-10-29 22:15:12.686539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.306 [2024-10-29 22:15:12.686594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:53.306 [2024-10-29 22:15:12.686615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.306 #33 NEW cov: 12479 ft: 14316 corp: 14/270b lim: 50 exec/s: 33 rss: 74Mb L: 23/32 MS: 1 EraseBytes- 00:07:53.306 [2024-10-29 22:15:12.736739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15769875096293418790 len:55770 00:07:53.306 [2024-10-29 22:15:12.736767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.306 #34 NEW cov: 12479 ft: 14335 corp: 15/285b lim: 50 exec/s: 34 rss: 74Mb L: 15/32 MS: 1 ChangeBinInt- 00:07:53.306 [2024-10-29 22:15:12.787405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:07:53.306 [2024-10-29 22:15:12.787438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.306 [2024-10-29 22:15:12.787499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744043644780543 len:65536 00:07:53.306 [2024-10-29 22:15:12.787516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.306 [2024-10-29 22:15:12.787573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18442240474082181119 len:65536 00:07:53.306 [2024-10-29 22:15:12.787593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.306 #35 NEW cov: 12479 ft: 14412 corp: 16/317b lim: 50 exec/s: 35 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:07:53.619 [2024-10-29 22:15:12.837523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7595718147065866601 len:26986 00:07:53.619 [2024-10-29 22:15:12.837555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.619 [2024-10-29 22:15:12.837619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:7595718147998050665 
len:26986 00:07:53.619 [2024-10-29 22:15:12.837635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.619 [2024-10-29 22:15:12.837710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:7595718629034387817 len:2592 00:07:53.619 [2024-10-29 22:15:12.837727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.619 #41 NEW cov: 12479 ft: 14489 corp: 17/348b lim: 50 exec/s: 41 rss: 74Mb L: 31/32 MS: 1 ChangeBinInt- 00:07:53.619 [2024-10-29 22:15:12.907273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15697817502255490854 len:55612 00:07:53.619 [2024-10-29 22:15:12.907308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.619 #42 NEW cov: 12479 ft: 14514 corp: 18/364b lim: 50 exec/s: 42 rss: 74Mb L: 16/32 MS: 1 InsertByte- 00:07:53.619 [2024-10-29 22:15:12.957779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15697817505862638041 len:55563 00:07:53.619 [2024-10-29 22:15:12.957809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.619 [2024-10-29 22:15:12.957886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:53.619 [2024-10-29 22:15:12.957905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.619 #43 NEW cov: 12479 ft: 14527 corp: 19/391b lim: 50 exec/s: 43 rss: 74Mb L: 27/32 MS: 1 InsertRepeatedBytes- 00:07:53.619 [2024-10-29 22:15:13.028265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7579392598416648553 len:26986 00:07:53.619 [2024-10-29 22:15:13.028295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.619 [2024-10-29 22:15:13.028360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:7595718147998050665 len:26986 00:07:53.619 [2024-10-29 22:15:13.028376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.619 [2024-10-29 22:15:13.028445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:7595718147998050665 len:55563 00:07:53.619 [2024-10-29 22:15:13.028464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.619 #44 NEW cov: 12479 ft: 14535 corp: 20/423b lim: 50 exec/s: 44 rss: 74Mb L: 32/32 MS: 1 InsertByte- 00:07:53.619 [2024-10-29 22:15:13.078865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:07:53.619 [2024-10-29 22:15:13.078891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.619 [2024-10-29 22:15:13.078987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:53.619 [2024-10-29 22:15:13.079006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.619 [2024-10-29 22:15:13.079090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:53.619 [2024-10-29 22:15:13.079109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.619 [2024-10-29 22:15:13.079193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:07:53.619 [2024-10-29 22:15:13.079210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.619 #45 NEW cov: 12479 ft: 14820 corp: 21/468b lim: 50 exec/s: 45 rss: 75Mb L: 45/45 MS: 1 CopyPart- 00:07:53.876 [2024-10-29 22:15:13.148580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4745063212097591769 len:55770 00:07:53.876 [2024-10-29 22:15:13.148608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.876 [2024-10-29 22:15:13.148669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069599133695 len:65536 00:07:53.876 [2024-10-29 22:15:13.148687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.876 #46 NEW cov: 12479 ft: 14895 corp: 22/496b lim: 50 exec/s: 46 rss: 75Mb L: 28/45 MS: 1 InsertByte- 00:07:53.876 [2024-10-29 22:15:13.218446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15697816616804407769 len:55770 00:07:53.876 [2024-10-29 22:15:13.218474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.876 #48 NEW cov: 12479 ft: 14908 corp: 23/510b lim: 50 exec/s: 48 rss: 75Mb L: 14/45 MS: 2 EraseBytes-CopyPart- 00:07:53.876 [2024-10-29 22:15:13.268648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15697817502255490854 len:55770 00:07:53.876 [2024-10-29 22:15:13.268676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.876 #49 NEW cov: 12479 ft: 14918 corp: 24/523b lim: 50 exec/s: 49 rss: 75Mb L: 13/45 MS: 1 EraseBytes- 00:07:53.876 [2024-10-29 22:15:13.339434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7595718147065866601 len:26986 00:07:53.876 [2024-10-29 22:15:13.339461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.876 [2024-10-29 22:15:13.339521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:7595718147998050665 len:26986 00:07:53.876 [2024-10-29 22:15:13.339538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.876 [2024-10-29 22:15:13.339608] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:7595718629034387817 len:2592 00:07:53.876 [2024-10-29 22:15:13.339626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.876 #50 NEW cov: 12479 ft: 14954 corp: 25/554b lim: 50 exec/s: 50 rss: 75Mb L: 31/45 MS: 1 ShuffleBytes- 00:07:54.134 [2024-10-29 22:15:13.409143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15769875096293418790 len:55770 00:07:54.134 [2024-10-29 22:15:13.409171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.134 #51 NEW cov: 12479 ft: 14966 corp: 26/570b lim: 50 exec/s: 51 rss: 75Mb L: 16/45 MS: 1 InsertByte- 00:07:54.134 [2024-10-29 22:15:13.479855] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:07:54.134 [2024-10-29 22:15:13.479885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.134 [2024-10-29 22:15:13.479941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18377782704415440895 len:65536 00:07:54.134 [2024-10-29 22:15:13.479958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.134 [2024-10-29 22:15:13.480039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:54.134 [2024-10-29 22:15:13.480059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.134 #52 NEW cov: 12479 ft: 14991 corp: 27/608b lim: 50 exec/s: 52 rss: 75Mb L: 38/45 MS: 1 CopyPart- 00:07:54.134 [2024-10-29 22:15:13.529573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15769739856363202342 len:55770 00:07:54.134 [2024-10-29 22:15:13.529602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.134 #53 NEW cov: 12479 ft: 15003 corp: 28/624b lim: 50 exec/s: 53 rss: 75Mb L: 16/45 MS: 1 InsertByte- 00:07:54.134 [2024-10-29 22:15:13.579723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15697817502255536601 len:55770 00:07:54.134 [2024-10-29 22:15:13.579750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.134 #54 NEW cov: 12479 ft: 15008 corp: 29/639b lim: 50 exec/s: 54 rss: 75Mb L: 15/45 MS: 1 ChangeBit- 00:07:54.134 [2024-10-29 22:15:13.631029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:07:54.134 [2024-10-29 22:15:13.631058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.134 [2024-10-29 22:15:13.631138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:54.134 [2024-10-29 22:15:13.631158] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.134 [2024-10-29 22:15:13.631228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65453 00:07:54.134 [2024-10-29 22:15:13.631245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.134 [2024-10-29 22:15:13.631342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:12442509728149187756 len:44205 00:07:54.134 [2024-10-29 22:15:13.631360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.134 [2024-10-29 22:15:13.631455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:12442510084631473324 len:65536 00:07:54.134 [2024-10-29 22:15:13.631475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:54.134 #55 NEW cov: 12479 ft: 15047 corp: 30/689b lim: 50 exec/s: 27 rss: 75Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:07:54.134 #55 DONE cov: 12479 ft: 15047 corp: 30/689b lim: 50 exec/s: 27 rss: 75Mb 00:07:54.134 Done 55 runs in 2 second(s) 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:54.393 22:15:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:54.393 22:15:13 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:54.393 [2024-10-29 22:15:13.813705] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:54.393 [2024-10-29 22:15:13.813772] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3107381 ] 00:07:54.652 [2024-10-29 22:15:14.016837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.652 [2024-10-29 22:15:14.057434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.652 [2024-10-29 22:15:14.116819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.652 [2024-10-29 22:15:14.132987] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:54.652 INFO: Running with entropic power schedule (0xFF, 100). 00:07:54.652 INFO: Seed: 1131053827 00:07:54.652 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:54.652 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:54.652 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:54.652 INFO: A corpus is not provided, starting from an empty corpus 00:07:54.910 #2 INITED exec/s: 0 rss: 66Mb 00:07:54.910 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:54.910 This may also happen if the target rejected all inputs we tried so far 00:07:54.910 [2024-10-29 22:15:14.200869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:54.910 [2024-10-29 22:15:14.200910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.910 [2024-10-29 22:15:14.201008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:54.910 [2024-10-29 22:15:14.201025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.910 [2024-10-29 22:15:14.201119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:54.910 [2024-10-29 22:15:14.201138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.910 [2024-10-29 22:15:14.201235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:54.910 [2024-10-29 22:15:14.201253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.169 NEW_FUNC[1/717]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:55.169 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:55.169 #17 NEW cov: 12310 ft: 12311 corp: 2/89b lim: 90 exec/s: 0 rss: 74Mb L: 88/88 MS: 5 InsertByte-ChangeByte-InsertByte-EraseBytes-InsertRepeatedBytes- 00:07:55.169 [2024-10-29 22:15:14.541759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.169 [2024-10-29 22:15:14.541799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.169 [2024-10-29 22:15:14.541888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.169 [2024-10-29 22:15:14.541903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.169 [2024-10-29 22:15:14.541994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:55.169 [2024-10-29 22:15:14.542010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.169 [2024-10-29 22:15:14.542104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:55.169 [2024-10-29 22:15:14.542120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.169 #18 NEW cov: 12423 ft: 12838 corp: 3/178b lim: 90 exec/s: 0 rss: 74Mb L: 89/89 MS: 1 InsertByte- 00:07:55.169 [2024-10-29 22:15:14.612318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.169 [2024-10-29 22:15:14.612350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.169 [2024-10-29 22:15:14.612424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.169 [2024-10-29 22:15:14.612442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.169 [2024-10-29 22:15:14.612520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:55.169 [2024-10-29 22:15:14.612538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.169 [2024-10-29 22:15:14.612629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:55.169 [2024-10-29 22:15:14.612646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.169 #19 NEW cov: 12429 ft: 13080 corp: 4/266b lim: 90 exec/s: 0 rss: 74Mb L: 88/89 MS: 1 ChangeBit- 00:07:55.169 [2024-10-29 22:15:14.662291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.169 [2024-10-29 22:15:14.662322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.169 [2024-10-29 22:15:14.662396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.169 [2024-10-29 22:15:14.662415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.169 [2024-10-29 22:15:14.662493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:55.169 [2024-10-29 22:15:14.662510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.429 #20 NEW cov: 12514 ft: 13642 corp: 5/324b lim: 90 exec/s: 0 rss: 75Mb L: 58/89 MS: 1 EraseBytes- 00:07:55.429 [2024-10-29 22:15:14.732809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.429 [2024-10-29 22:15:14.732837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.429 [2024-10-29 22:15:14.732912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.429 [2024-10-29 22:15:14.732927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.429 [2024-10-29 22:15:14.733019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:55.429 [2024-10-29 22:15:14.733037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.429 #21 NEW cov: 12514 ft: 13762 corp: 6/394b lim: 90 exec/s: 0 rss: 75Mb L: 70/89 MS: 1 EraseBytes- 00:07:55.429 [2024-10-29 22:15:14.803637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.429 [2024-10-29 22:15:14.803665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:07:55.429 [2024-10-29 22:15:14.803744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.429 [2024-10-29 22:15:14.803762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.429 [2024-10-29 22:15:14.803852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:55.429 [2024-10-29 22:15:14.803872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.429 [2024-10-29 22:15:14.803958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:55.429 [2024-10-29 22:15:14.803974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.429 #22 NEW cov: 12514 ft: 13907 corp: 7/482b lim: 90 exec/s: 0 rss: 75Mb L: 88/89 MS: 1 ChangeASCIIInt- 00:07:55.429 [2024-10-29 22:15:14.854275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.429 [2024-10-29 22:15:14.854308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.429 [2024-10-29 22:15:14.854376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.429 [2024-10-29 22:15:14.854395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.429 [2024-10-29 22:15:14.854468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:55.429 [2024-10-29 22:15:14.854487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.429 [2024-10-29 22:15:14.854585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:55.429 [2024-10-29 22:15:14.854603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.429 #23 NEW cov: 12514 ft: 13947 corp: 8/556b lim: 90 exec/s: 0 rss: 75Mb L: 74/89 MS: 1 EraseBytes- 00:07:55.429 [2024-10-29 22:15:14.904592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.429 [2024-10-29 22:15:14.904621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.429 [2024-10-29 22:15:14.904703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.429 [2024-10-29 22:15:14.904722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.429 [2024-10-29 22:15:14.904801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:55.429 [2024-10-29 22:15:14.904818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.429 [2024-10-29 22:15:14.904911] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:55.429 [2024-10-29 22:15:14.904928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.429 #24 NEW cov: 12514 ft: 14042 corp: 9/644b lim: 90 exec/s: 0 rss: 75Mb L: 88/89 MS: 1 ChangeBinInt- 00:07:55.688 [2024-10-29 22:15:14.974617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.688 [2024-10-29 22:15:14.974648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.688 [2024-10-29 22:15:14.974721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.688 [2024-10-29 22:15:14.974738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.688 [2024-10-29 22:15:14.974821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:55.688 [2024-10-29 22:15:14.974838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.688 #28 NEW cov: 12514 ft: 14063 corp: 10/715b lim: 90 exec/s: 0 rss: 75Mb L: 71/89 MS: 4 ShuffleBytes-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:07:55.688 [2024-10-29 22:15:15.025166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.688 [2024-10-29 22:15:15.025194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.688 [2024-10-29 22:15:15.025268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.688 [2024-10-29 22:15:15.025283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.688 [2024-10-29 22:15:15.025382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:55.688 [2024-10-29 22:15:15.025402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.688 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:55.688 #29 NEW cov: 12537 ft: 14123 corp: 11/786b lim: 90 exec/s: 0 rss: 75Mb L: 71/89 MS: 1 ChangeBit- 00:07:55.688 [2024-10-29 22:15:15.095940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.688 [2024-10-29 22:15:15.095970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.688 [2024-10-29 22:15:15.096050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.688 [2024-10-29 22:15:15.096068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.688 [2024-10-29 22:15:15.096163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:55.688 [2024-10-29 22:15:15.096181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.688 [2024-10-29 22:15:15.096274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:55.688 [2024-10-29 22:15:15.096291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.688 #30 NEW cov: 12537 ft: 14190 corp: 12/868b lim: 90 exec/s: 0 rss: 75Mb L: 82/89 MS: 1 CopyPart- 00:07:55.688 [2024-10-29 22:15:15.166359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.688 [2024-10-29 22:15:15.166387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.688 [2024-10-29 22:15:15.166470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.688 [2024-10-29 22:15:15.166487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.688 [2024-10-29 22:15:15.166572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:55.688 [2024-10-29 22:15:15.166590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.688 [2024-10-29 22:15:15.166683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:55.688 [2024-10-29 22:15:15.166701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.688 #31 NEW cov: 12537 ft: 14216 corp: 13/956b lim: 90 exec/s: 31 rss: 75Mb L: 88/89 MS: 1 ChangeBit- 00:07:55.947 [2024-10-29 22:15:15.236771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.947 [2024-10-29 22:15:15.236801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.947 [2024-10-29 22:15:15.236882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.947 [2024-10-29 22:15:15.236900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.947 [2024-10-29 22:15:15.236983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:55.947 [2024-10-29 22:15:15.237000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.947 [2024-10-29 22:15:15.237092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:55.947 [2024-10-29 22:15:15.237108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.947 #32 NEW cov: 12537 ft: 14283 corp: 14/1044b lim: 90 exec/s: 32 rss: 75Mb L: 88/89 MS: 1 ChangeBit- 00:07:55.948 [2024-10-29 22:15:15.306878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.948 [2024-10-29 22:15:15.306908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.948 [2024-10-29 22:15:15.306970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.948 [2024-10-29 22:15:15.306988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.948 [2024-10-29 22:15:15.307076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:55.948 [2024-10-29 22:15:15.307094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.948 #33 NEW cov: 12537 ft: 14308 corp: 15/1114b lim: 90 exec/s: 33 rss: 75Mb L: 70/89 MS: 1 EraseBytes- 00:07:55.948 [2024-10-29 22:15:15.377859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.948 [2024-10-29 22:15:15.377890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.948 [2024-10-29 22:15:15.377963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.948 [2024-10-29 22:15:15.377980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.948 [2024-10-29 22:15:15.378072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:55.948 [2024-10-29 22:15:15.378094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.948 [2024-10-29 22:15:15.378171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:55.948 [2024-10-29 22:15:15.378187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.948 #34 NEW cov: 12537 ft: 14326 corp: 16/1202b lim: 90 exec/s: 34 rss: 75Mb L: 88/89 MS: 1 CopyPart- 00:07:55.948 [2024-10-29 22:15:15.427644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:55.948 [2024-10-29 22:15:15.427673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.948 [2024-10-29 22:15:15.427768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:55.948 [2024-10-29 22:15:15.427789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.948 #35 NEW cov: 12537 ft: 14715 corp: 17/1254b lim: 90 exec/s: 35 rss: 75Mb L: 52/89 MS: 1 EraseBytes- 00:07:56.207 [2024-10-29 22:15:15.478478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:56.207 [2024-10-29 22:15:15.478511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.207 [2024-10-29 22:15:15.478581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:56.207 [2024-10-29 22:15:15.478599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.207 [2024-10-29 22:15:15.478661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:56.207 [2024-10-29 22:15:15.478682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.207 #36 NEW cov: 12537 ft: 14794 corp: 18/1312b lim: 90 exec/s: 36 rss: 75Mb L: 58/89 MS: 1 ChangeBinInt- 00:07:56.207 [2024-10-29 22:15:15.549463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:56.207 [2024-10-29 22:15:15.549498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.207 [2024-10-29 22:15:15.549581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:56.207 [2024-10-29 22:15:15.549601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.207 [2024-10-29 22:15:15.549663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:56.207 [2024-10-29 22:15:15.549684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.207 [2024-10-29 22:15:15.549785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:56.207 [2024-10-29 22:15:15.549808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.207 #37 NEW cov: 12537 ft: 14829 corp: 19/1400b lim: 90 exec/s: 37 rss: 75Mb L: 88/89 MS: 1 ChangeBit- 00:07:56.207 [2024-10-29 22:15:15.619545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:56.207 [2024-10-29 22:15:15.619576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.207 [2024-10-29 22:15:15.619653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:56.207 [2024-10-29 22:15:15.619669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.207 [2024-10-29 22:15:15.619760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:56.207 [2024-10-29 22:15:15.619780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.207 [2024-10-29 22:15:15.619874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:56.207 [2024-10-29 22:15:15.619893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.207 #43 NEW cov: 12537 ft: 14837 corp: 20/1489b lim: 90 exec/s: 43 rss: 75Mb L: 89/89 MS: 1 InsertByte- 00:07:56.207 [2024-10-29 22:15:15.670133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:56.207 [2024-10-29 22:15:15.670162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.207 [2024-10-29 22:15:15.670235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:56.207 [2024-10-29 22:15:15.670253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.207 [2024-10-29 22:15:15.670354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:56.207 [2024-10-29 22:15:15.670373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.207 [2024-10-29 22:15:15.670462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:56.207 [2024-10-29 22:15:15.670481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.207 #44 NEW cov: 12537 ft: 14861 corp: 21/1568b lim: 90 exec/s: 44 rss: 75Mb L: 79/89 MS: 1 EraseBytes- 00:07:56.207 [2024-10-29 22:15:15.719710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:56.207 [2024-10-29 22:15:15.719738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.207 [2024-10-29 22:15:15.719798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:56.207 [2024-10-29 22:15:15.719815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.467 #45 NEW cov: 12537 ft: 14905 corp: 22/1612b lim: 90 exec/s: 45 rss: 76Mb L: 44/89 MS: 1 EraseBytes- 00:07:56.467 [2024-10-29 22:15:15.790989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:56.467 [2024-10-29 22:15:15.791016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.467 [2024-10-29 22:15:15.791098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:56.467 [2024-10-29 22:15:15.791115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.467 [2024-10-29 22:15:15.791208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:56.467 [2024-10-29 22:15:15.791226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.467 [2024-10-29 22:15:15.791331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:56.467 [2024-10-29 22:15:15.791349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.467 #46 NEW cov: 12537 ft: 14920 corp: 23/1701b lim: 90 exec/s: 46 rss: 76Mb L: 89/89 MS: 1 CrossOver- 00:07:56.467 [2024-10-29 22:15:15.860853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:56.467 [2024-10-29 22:15:15.860880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.467 [2024-10-29 22:15:15.860959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:56.467 [2024-10-29 22:15:15.860978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.467 [2024-10-29 22:15:15.861067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:56.467 [2024-10-29 22:15:15.861084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.467 #47 NEW cov: 12537 ft: 14969 corp: 24/1772b lim: 90 exec/s: 47 rss: 76Mb L: 71/89 MS: 1 ChangeBinInt- 00:07:56.467 [2024-10-29 22:15:15.911756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:56.467 [2024-10-29 22:15:15.911785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.467 [2024-10-29 22:15:15.911870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:56.467 [2024-10-29 22:15:15.911887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.467 [2024-10-29 22:15:15.911977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:56.467 [2024-10-29 22:15:15.911995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.467 [2024-10-29 22:15:15.912089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:56.467 [2024-10-29 22:15:15.912107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.467 #48 NEW cov: 12537 ft: 14987 corp: 25/1860b lim: 90 exec/s: 48 rss: 76Mb L: 88/89 MS: 1 ChangeASCIIInt- 00:07:56.467 [2024-10-29 22:15:15.961562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:56.467 [2024-10-29 22:15:15.961593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.467 [2024-10-29 22:15:15.961659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:56.467 [2024-10-29 22:15:15.961678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.467 [2024-10-29 22:15:15.961757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:56.467 [2024-10-29 22:15:15.961774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.727 #49 NEW cov: 12537 ft: 15010 corp: 26/1930b lim: 90 exec/s: 49 rss: 76Mb L: 70/89 MS: 1 CopyPart- 00:07:56.727 [2024-10-29 22:15:16.032149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:56.727 [2024-10-29 22:15:16.032177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.727 [2024-10-29 22:15:16.032259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:56.727 [2024-10-29 22:15:16.032279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.727 [2024-10-29 22:15:16.032364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:56.727 [2024-10-29 22:15:16.032378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.727 [2024-10-29 22:15:16.032477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:56.727 [2024-10-29 22:15:16.032493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.727 #50 NEW cov: 12537 ft: 15013 corp: 27/2018b lim: 90 exec/s: 50 rss: 76Mb L: 88/89 MS: 1 ChangeByte- 00:07:56.727 [2024-10-29 22:15:16.082303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:56.727 [2024-10-29 22:15:16.082332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.727 [2024-10-29 22:15:16.082397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:56.727 [2024-10-29 22:15:16.082416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.727 [2024-10-29 22:15:16.082498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:56.727 [2024-10-29 22:15:16.082517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.727 #51 NEW cov: 12537 ft: 15041 corp: 28/2089b lim: 90 exec/s: 51 rss: 76Mb L: 71/89 MS: 1 InsertByte- 00:07:56.727 [2024-10-29 22:15:16.152272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:56.727 [2024-10-29 22:15:16.152304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.727 [2024-10-29 22:15:16.152387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:56.727 [2024-10-29 22:15:16.152405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.727 #52 NEW cov: 12537 ft: 15076 corp: 29/2139b lim: 90 exec/s: 26 rss: 76Mb L: 50/89 MS: 1 EraseBytes- 00:07:56.727 #52 DONE cov: 12537 ft: 15076 corp: 29/2139b lim: 90 exec/s: 26 rss: 76Mb 00:07:56.727 Done 52 runs in 2 second(s) 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:56.987 22:15:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:56.987 [2024-10-29 22:15:16.329927] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:07:56.987 [2024-10-29 22:15:16.330010] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3107738 ] 00:07:57.245 [2024-10-29 22:15:16.529496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.245 [2024-10-29 22:15:16.568001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.245 [2024-10-29 22:15:16.627368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.245 [2024-10-29 22:15:16.643537] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:57.245 INFO: Running with entropic power schedule (0xFF, 100). 
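The nvmf/run.sh trace above walks through the per-fuzzer-type setup for type 21: derive the TCP port (4421), create the per-type corpus directory, rewrite trsvcid in fuzz_json.conf, register the two known LeakSanitizer suppressions, and finally launch llvm_nvme_fuzz with -Z 21. Condensed into a standalone script it amounts to roughly the sketch below; the individual commands mirror the trace, but the variable names, the port arithmetic, and the output redirections are reconstructed assumptions rather than the script's actual source.

#!/usr/bin/env bash
# Illustrative reconstruction of the per-fuzzer-type setup seen in the trace above.
# SPDK_DIR matches the workspace path printed in the log; everything else is inferred.
set -e
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}
fuzzer_type=21
timen=1                                    # value passed to -t in the trace
core=0x1
port=$((4400 + fuzzer_type))               # trace shows 4420/4421/4422 for types 20/21/22
corpus_dir="$SPDK_DIR/../corpus/llvm_nvmf_$(printf %02d "$fuzzer_type")"
nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
suppress_file=/var/tmp/suppress_nvmf_fuzz

mkdir -p "$corpus_dir"
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

# Point this run's JSON config at the derived TCP port (redirection assumed).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

# Known, intentional leaks are suppressed for LeakSanitizer (redirection assumed).
printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$suppress_file"
export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"

"$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
    -m "$core" -s 512 -P "$SPDK_DIR/../output/llvm/" \
    -F "$trid" -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"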
00:07:57.245 INFO: Seed: 3640055062 00:07:57.245 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:57.245 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:57.245 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:57.245 INFO: A corpus is not provided, starting from an empty corpus 00:07:57.246 #2 INITED exec/s: 0 rss: 66Mb 00:07:57.246 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:57.246 This may also happen if the target rejected all inputs we tried so far 00:07:57.246 [2024-10-29 22:15:16.702811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:57.246 [2024-10-29 22:15:16.702843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.813 NEW_FUNC[1/717]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:57.813 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:57.813 #7 NEW cov: 12285 ft: 12269 corp: 2/20b lim: 50 exec/s: 0 rss: 74Mb L: 19/19 MS: 5 ShuffleBytes-CopyPart-CrossOver-EraseBytes-InsertRepeatedBytes- 00:07:57.813 [2024-10-29 22:15:17.054839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:57.813 [2024-10-29 22:15:17.054887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.813 #8 NEW cov: 12398 ft: 12819 corp: 3/39b lim: 50 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 ShuffleBytes- 00:07:57.813 [2024-10-29 22:15:17.125329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:57.813 [2024-10-29 22:15:17.125356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.813 #9 NEW cov: 12404 ft: 13043 corp: 4/58b lim: 50 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 ChangeBinInt- 00:07:57.813 [2024-10-29 22:15:17.195459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:57.813 [2024-10-29 22:15:17.195487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.813 #10 NEW cov: 12489 ft: 13352 corp: 5/77b lim: 50 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 ChangeByte- 00:07:57.813 [2024-10-29 22:15:17.245623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:57.813 [2024-10-29 22:15:17.245651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.813 #11 NEW cov: 12489 ft: 13488 corp: 6/96b lim: 50 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 ChangeBit- 00:07:57.813 [2024-10-29 22:15:17.316583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:57.813 [2024-10-29 22:15:17.316611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.813 [2024-10-29 22:15:17.316673] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:57.813 [2024-10-29 22:15:17.316690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.813 [2024-10-29 22:15:17.316749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:57.813 [2024-10-29 22:15:17.316765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.072 #12 NEW cov: 12489 ft: 14361 corp: 7/131b lim: 50 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:07:58.072 [2024-10-29 22:15:17.386512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.072 [2024-10-29 22:15:17.386540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.072 [2024-10-29 22:15:17.386642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:58.072 [2024-10-29 22:15:17.386657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.072 #13 NEW cov: 12489 ft: 14839 corp: 8/151b lim: 50 exec/s: 0 rss: 74Mb L: 20/35 MS: 1 CrossOver- 00:07:58.072 [2024-10-29 22:15:17.446421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.072 [2024-10-29 22:15:17.446449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.072 #14 NEW cov: 12489 ft: 14916 corp: 9/170b lim: 50 exec/s: 0 rss: 74Mb L: 19/35 MS: 1 ChangeBit- 00:07:58.072 [2024-10-29 22:15:17.496696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.072 [2024-10-29 22:15:17.496724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.072 #15 NEW cov: 12489 ft: 14941 corp: 10/189b lim: 50 exec/s: 0 rss: 74Mb L: 19/35 MS: 1 ChangeBit- 00:07:58.072 [2024-10-29 22:15:17.567154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.072 [2024-10-29 22:15:17.567184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.072 [2024-10-29 22:15:17.567246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:58.072 [2024-10-29 22:15:17.567268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.331 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:58.331 #16 NEW cov: 12512 ft: 15012 corp: 11/209b lim: 50 exec/s: 0 rss: 74Mb L: 20/35 MS: 1 ChangeBit- 00:07:58.331 [2024-10-29 22:15:17.637720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.331 [2024-10-29 22:15:17.637747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.331 [2024-10-29 22:15:17.637814] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:58.331 [2024-10-29 22:15:17.637831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.331 [2024-10-29 22:15:17.637909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:58.331 [2024-10-29 22:15:17.637925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.331 #17 NEW cov: 12512 ft: 15102 corp: 12/245b lim: 50 exec/s: 0 rss: 74Mb L: 36/36 MS: 1 InsertByte- 00:07:58.331 [2024-10-29 22:15:17.707717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.331 [2024-10-29 22:15:17.707744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.331 [2024-10-29 22:15:17.707816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:58.331 [2024-10-29 22:15:17.707834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.331 #18 NEW cov: 12512 ft: 15122 corp: 13/265b lim: 50 exec/s: 18 rss: 74Mb L: 20/36 MS: 1 ChangeBinInt- 00:07:58.331 [2024-10-29 22:15:17.757895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.331 [2024-10-29 22:15:17.757924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.331 [2024-10-29 22:15:17.757985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:58.331 [2024-10-29 22:15:17.758002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.331 #19 NEW cov: 12512 ft: 15130 corp: 14/292b lim: 50 exec/s: 19 rss: 74Mb L: 27/36 MS: 1 CMP- DE: "\366PW\002\000\000\000\000"- 00:07:58.331 [2024-10-29 22:15:17.808437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.331 [2024-10-29 22:15:17.808465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.331 [2024-10-29 22:15:17.808527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:58.331 [2024-10-29 22:15:17.808546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.331 [2024-10-29 22:15:17.808623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:58.331 [2024-10-29 22:15:17.808645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.331 #20 NEW cov: 12512 ft: 15134 corp: 15/331b lim: 50 exec/s: 20 rss: 74Mb L: 39/39 MS: 1 CrossOver- 00:07:58.590 [2024-10-29 22:15:17.879170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.590 [2024-10-29 22:15:17.879196] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.590 [2024-10-29 22:15:17.879270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:58.590 [2024-10-29 22:15:17.879286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.590 [2024-10-29 22:15:17.879370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:58.590 [2024-10-29 22:15:17.879388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.590 [2024-10-29 22:15:17.879480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:58.590 [2024-10-29 22:15:17.879509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:58.590 #21 NEW cov: 12512 ft: 15440 corp: 16/371b lim: 50 exec/s: 21 rss: 75Mb L: 40/40 MS: 1 CrossOver- 00:07:58.590 [2024-10-29 22:15:17.948388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.590 [2024-10-29 22:15:17.948414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.590 #22 NEW cov: 12512 ft: 15463 corp: 17/390b lim: 50 exec/s: 22 rss: 75Mb L: 19/40 MS: 1 ChangeBit- 00:07:58.590 [2024-10-29 22:15:17.999099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.590 [2024-10-29 22:15:17.999129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.590 [2024-10-29 22:15:17.999228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:58.590 [2024-10-29 22:15:17.999244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.590 #23 NEW cov: 12512 ft: 15478 corp: 18/410b lim: 50 exec/s: 23 rss: 75Mb L: 20/40 MS: 1 InsertByte- 00:07:58.590 [2024-10-29 22:15:18.048950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.590 [2024-10-29 22:15:18.048978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.590 #24 NEW cov: 12512 ft: 15541 corp: 19/423b lim: 50 exec/s: 24 rss: 75Mb L: 13/40 MS: 1 EraseBytes- 00:07:58.849 [2024-10-29 22:15:18.119608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.849 [2024-10-29 22:15:18.119637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.849 [2024-10-29 22:15:18.119690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:58.849 [2024-10-29 22:15:18.119710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.849 #25 NEW cov: 12512 ft: 15552 corp: 20/443b lim: 50 exec/s: 25 rss: 75Mb L: 20/40 MS: 1 CMP- DE: "\003\000\000\000"- 
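A note on the mutation annotations in the lines around here: the "CMP-" tag on the entry above and the "PersAutoDict-" tags a few entries further down come from libFuzzer's comparison tracing. Operands it observes in comparisons at runtime (such as the 4-byte token "\003\000\000\000") are added to a persistent auto dictionary, replayed into later mutations, and the most useful ones are summarized at the end of the run under "Recommended dictionary". If you want to seed a later run with them by hand, they can be written into an AFL-style dictionary file, as in the hypothetical sketch below. Two caveats: libFuzzer's dictionary parser takes hex escapes (\x03), not the octal escapes printed in the log, and whether this SPDK harness forwards a -dict= option through to libFuzzer is an assumption here, not something the trace shows.

# Hypothetical: convert recommended-dictionary tokens into a libFuzzer -dict file.
# The key names are arbitrary labels; the values are the log's octal escapes rewritten in hex.
cat > /tmp/llvm_nvmf_21.dict <<'EOF'
token_a="\x03\x00\x00\x00"
token_b="\xf6PW\x02\x00\x00\x00\x00"
EOF
# If the harness passes libFuzzer flags through, the file would be used like:
#   llvm_nvme_fuzz ... -dict=/tmp/llvm_nvmf_21.dict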
00:07:58.849 [2024-10-29 22:15:18.190218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.849 [2024-10-29 22:15:18.190247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.849 [2024-10-29 22:15:18.190351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:58.849 [2024-10-29 22:15:18.190372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.849 #29 NEW cov: 12512 ft: 15597 corp: 21/471b lim: 50 exec/s: 29 rss: 75Mb L: 28/40 MS: 4 ShuffleBytes-CMP-CrossOver-CrossOver- DE: "\000\000\000\366"- 00:07:58.849 [2024-10-29 22:15:18.250031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.849 [2024-10-29 22:15:18.250060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.849 #30 NEW cov: 12512 ft: 15604 corp: 22/490b lim: 50 exec/s: 30 rss: 75Mb L: 19/40 MS: 1 PersAutoDict- DE: "\003\000\000\000"- 00:07:58.849 [2024-10-29 22:15:18.300903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.849 [2024-10-29 22:15:18.300932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.849 [2024-10-29 22:15:18.300997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:58.849 [2024-10-29 22:15:18.301015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.849 [2024-10-29 22:15:18.301097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:58.849 [2024-10-29 22:15:18.301115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.849 #31 NEW cov: 12512 ft: 15637 corp: 23/525b lim: 50 exec/s: 31 rss: 75Mb L: 35/40 MS: 1 ChangeByte- 00:07:58.849 [2024-10-29 22:15:18.350735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:58.849 [2024-10-29 22:15:18.350762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.849 [2024-10-29 22:15:18.350821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:58.849 [2024-10-29 22:15:18.350839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.108 #32 NEW cov: 12512 ft: 15650 corp: 24/553b lim: 50 exec/s: 32 rss: 75Mb L: 28/40 MS: 1 CrossOver- 00:07:59.108 [2024-10-29 22:15:18.422054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:59.108 [2024-10-29 22:15:18.422080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.108 [2024-10-29 22:15:18.422144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 
nsid:0 00:07:59.108 [2024-10-29 22:15:18.422159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.108 [2024-10-29 22:15:18.422245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:59.108 [2024-10-29 22:15:18.422262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.108 [2024-10-29 22:15:18.422357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:59.108 [2024-10-29 22:15:18.422373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.108 #33 NEW cov: 12512 ft: 15658 corp: 25/600b lim: 50 exec/s: 33 rss: 75Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:07:59.108 [2024-10-29 22:15:18.471360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:59.108 [2024-10-29 22:15:18.471386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.108 #34 NEW cov: 12512 ft: 15668 corp: 26/619b lim: 50 exec/s: 34 rss: 75Mb L: 19/47 MS: 1 ShuffleBytes- 00:07:59.108 [2024-10-29 22:15:18.522545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:59.108 [2024-10-29 22:15:18.522574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.108 [2024-10-29 22:15:18.522643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:59.108 [2024-10-29 22:15:18.522660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.108 [2024-10-29 22:15:18.522747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:59.108 [2024-10-29 22:15:18.522769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.108 [2024-10-29 22:15:18.522859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:59.108 [2024-10-29 22:15:18.522875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.108 #35 NEW cov: 12512 ft: 15727 corp: 27/659b lim: 50 exec/s: 35 rss: 75Mb L: 40/47 MS: 1 CrossOver- 00:07:59.108 [2024-10-29 22:15:18.592941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:59.108 [2024-10-29 22:15:18.592969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.108 [2024-10-29 22:15:18.593057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:59.108 [2024-10-29 22:15:18.593072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.108 [2024-10-29 22:15:18.593155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 
cid:2 nsid:0 00:07:59.108 [2024-10-29 22:15:18.593172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.108 [2024-10-29 22:15:18.593265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:59.108 [2024-10-29 22:15:18.593280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.367 #36 NEW cov: 12512 ft: 15739 corp: 28/700b lim: 50 exec/s: 36 rss: 75Mb L: 41/47 MS: 1 InsertByte- 00:07:59.367 [2024-10-29 22:15:18.662858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:59.367 [2024-10-29 22:15:18.662887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.367 [2024-10-29 22:15:18.662962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:59.367 [2024-10-29 22:15:18.662979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.367 [2024-10-29 22:15:18.663062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:59.367 [2024-10-29 22:15:18.663079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.367 #37 NEW cov: 12512 ft: 15752 corp: 29/731b lim: 50 exec/s: 18 rss: 75Mb L: 31/47 MS: 1 CrossOver- 00:07:59.367 #37 DONE cov: 12512 ft: 15752 corp: 29/731b lim: 50 exec/s: 18 rss: 75Mb 00:07:59.367 ###### Recommended dictionary. ###### 00:07:59.367 "\366PW\002\000\000\000\000" # Uses: 0 00:07:59.367 "\003\000\000\000" # Uses: 1 00:07:59.367 "\000\000\000\366" # Uses: 0 00:07:59.367 ###### End of recommended dictionary. 
###### 00:07:59.367 Done 37 runs in 2 second(s) 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:59.367 22:15:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:59.367 [2024-10-29 22:15:18.857017] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:07:59.367 [2024-10-29 22:15:18.857084] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3108094 ] 00:07:59.626 [2024-10-29 22:15:19.054358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.626 [2024-10-29 22:15:19.092931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.885 [2024-10-29 22:15:19.152086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.885 [2024-10-29 22:15:19.168211] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:59.885 INFO: Running with entropic power schedule (0xFF, 100). 00:07:59.885 INFO: Seed: 1870105438 00:07:59.885 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:07:59.885 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:07:59.885 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:59.885 INFO: A corpus is not provided, starting from an empty corpus 00:07:59.885 #2 INITED exec/s: 0 rss: 66Mb 00:07:59.885 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:59.885 This may also happen if the target rejected all inputs we tried so far 00:07:59.885 [2024-10-29 22:15:19.223767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:59.885 [2024-10-29 22:15:19.223800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.885 [2024-10-29 22:15:19.223857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:59.885 [2024-10-29 22:15:19.223874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.144 NEW_FUNC[1/717]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:00.144 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:00.144 #3 NEW cov: 12311 ft: 12310 corp: 2/42b lim: 85 exec/s: 0 rss: 74Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:08:00.144 [2024-10-29 22:15:19.564859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.144 [2024-10-29 22:15:19.564922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.144 #6 NEW cov: 12424 ft: 13680 corp: 3/69b lim: 85 exec/s: 0 rss: 74Mb L: 27/41 MS: 3 CrossOver-CMP-CrossOver- DE: "\000\007"- 00:08:00.144 [2024-10-29 22:15:19.614742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.144 [2024-10-29 22:15:19.614771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.144 #7 NEW cov: 12430 ft: 13901 corp: 4/97b lim: 85 exec/s: 0 rss: 74Mb L: 28/41 MS: 1 CrossOver- 00:08:00.403 [2024-10-29 22:15:19.674906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.403 [2024-10-29 22:15:19.674937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.403 #9 NEW cov: 12515 ft: 14227 corp: 5/125b lim: 85 exec/s: 0 rss: 74Mb L: 28/41 MS: 2 ShuffleBytes-CrossOver- 00:08:00.403 [2024-10-29 22:15:19.715004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.403 [2024-10-29 22:15:19.715032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.403 #10 NEW cov: 12515 ft: 14294 corp: 6/152b lim: 85 exec/s: 0 rss: 74Mb L: 27/41 MS: 1 CopyPart- 00:08:00.403 [2024-10-29 22:15:19.755506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.403 [2024-10-29 22:15:19.755535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.403 [2024-10-29 22:15:19.755577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:00.403 [2024-10-29 22:15:19.755594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.403 [2024-10-29 22:15:19.755653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:00.403 [2024-10-29 22:15:19.755670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.403 #11 NEW cov: 12515 ft: 14856 corp: 7/208b lim: 85 exec/s: 0 rss: 74Mb L: 56/56 MS: 1 InsertRepeatedBytes- 00:08:00.403 [2024-10-29 22:15:19.815293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.403 [2024-10-29 22:15:19.815330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.403 #12 NEW cov: 12515 ft: 14930 corp: 8/236b lim: 85 exec/s: 0 rss: 74Mb L: 28/56 MS: 1 ShuffleBytes- 00:08:00.403 [2024-10-29 22:15:19.875465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.403 [2024-10-29 22:15:19.875495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.403 #13 NEW cov: 12515 ft: 14973 corp: 9/264b lim: 85 exec/s: 0 rss: 74Mb L: 28/56 MS: 1 PersAutoDict- DE: "\000\007"- 00:08:00.403 [2024-10-29 22:15:19.915553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.403 [2024-10-29 22:15:19.915581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.662 #14 NEW cov: 12515 ft: 15105 corp: 10/292b lim: 85 exec/s: 0 rss: 74Mb L: 28/56 MS: 1 PersAutoDict- DE: "\000\007"- 00:08:00.662 [2024-10-29 22:15:19.955652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.662 [2024-10-29 22:15:19.955681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.662 #15 NEW cov: 12515 ft: 
15163 corp: 11/320b lim: 85 exec/s: 0 rss: 74Mb L: 28/56 MS: 1 ChangeBit- 00:08:00.662 [2024-10-29 22:15:20.015874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.662 [2024-10-29 22:15:20.015912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.662 #16 NEW cov: 12515 ft: 15218 corp: 12/348b lim: 85 exec/s: 0 rss: 74Mb L: 28/56 MS: 1 ChangeBit- 00:08:00.662 [2024-10-29 22:15:20.076188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.662 [2024-10-29 22:15:20.076226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.662 [2024-10-29 22:15:20.076287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:00.662 [2024-10-29 22:15:20.076309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.662 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:00.662 #17 NEW cov: 12538 ft: 15234 corp: 13/389b lim: 85 exec/s: 0 rss: 74Mb L: 41/56 MS: 1 ChangeBinInt- 00:08:00.662 [2024-10-29 22:15:20.116100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.662 [2024-10-29 22:15:20.116130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.662 #18 NEW cov: 12538 ft: 15268 corp: 14/410b lim: 85 exec/s: 0 rss: 74Mb L: 21/56 MS: 1 EraseBytes- 00:08:00.662 [2024-10-29 22:15:20.176281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.663 [2024-10-29 22:15:20.176317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.922 #19 NEW cov: 12538 ft: 15304 corp: 15/438b lim: 85 exec/s: 0 rss: 74Mb L: 28/56 MS: 1 ShuffleBytes- 00:08:00.922 [2024-10-29 22:15:20.216571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.922 [2024-10-29 22:15:20.216600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.922 [2024-10-29 22:15:20.216656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:00.922 [2024-10-29 22:15:20.216674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.922 #20 NEW cov: 12538 ft: 15410 corp: 16/479b lim: 85 exec/s: 20 rss: 74Mb L: 41/56 MS: 1 ShuffleBytes- 00:08:00.922 [2024-10-29 22:15:20.256851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.922 [2024-10-29 22:15:20.256879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.922 [2024-10-29 22:15:20.256919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:00.922 [2024-10-29 22:15:20.256936] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.922 [2024-10-29 22:15:20.256994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:00.922 [2024-10-29 22:15:20.257012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.922 #21 NEW cov: 12538 ft: 15443 corp: 17/542b lim: 85 exec/s: 21 rss: 74Mb L: 63/63 MS: 1 InsertRepeatedBytes- 00:08:00.922 [2024-10-29 22:15:20.296764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.922 [2024-10-29 22:15:20.296793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.922 [2024-10-29 22:15:20.296850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:00.922 [2024-10-29 22:15:20.296867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.922 #22 NEW cov: 12538 ft: 15479 corp: 18/578b lim: 85 exec/s: 22 rss: 74Mb L: 36/63 MS: 1 CMP- DE: "\366\261;\262O\3455\000"- 00:08:00.922 [2024-10-29 22:15:20.337051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.922 [2024-10-29 22:15:20.337080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.922 [2024-10-29 22:15:20.337129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:00.922 [2024-10-29 22:15:20.337145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.922 [2024-10-29 22:15:20.337204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:00.922 [2024-10-29 22:15:20.337221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.922 #23 NEW cov: 12538 ft: 15523 corp: 19/641b lim: 85 exec/s: 23 rss: 75Mb L: 63/63 MS: 1 ChangeByte- 00:08:00.922 [2024-10-29 22:15:20.396968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.922 [2024-10-29 22:15:20.396997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.922 #24 NEW cov: 12538 ft: 15540 corp: 20/670b lim: 85 exec/s: 24 rss: 75Mb L: 29/63 MS: 1 InsertByte- 00:08:00.922 [2024-10-29 22:15:20.437021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:00.922 [2024-10-29 22:15:20.437048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.181 #25 NEW cov: 12538 ft: 15603 corp: 21/698b lim: 85 exec/s: 25 rss: 75Mb L: 28/63 MS: 1 ChangeByte- 00:08:01.181 [2024-10-29 22:15:20.497192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.181 [2024-10-29 22:15:20.497221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.181 #26 NEW cov: 12538 ft: 15624 corp: 22/725b lim: 85 exec/s: 26 rss: 75Mb L: 27/63 MS: 1 ChangeBinInt- 00:08:01.181 [2024-10-29 22:15:20.557511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.181 [2024-10-29 22:15:20.557539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.181 [2024-10-29 22:15:20.557577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:01.181 [2024-10-29 22:15:20.557593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.181 #27 NEW cov: 12538 ft: 15676 corp: 23/775b lim: 85 exec/s: 27 rss: 75Mb L: 50/63 MS: 1 InsertRepeatedBytes- 00:08:01.181 [2024-10-29 22:15:20.618000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.181 [2024-10-29 22:15:20.618029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.181 [2024-10-29 22:15:20.618073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:01.181 [2024-10-29 22:15:20.618090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.181 [2024-10-29 22:15:20.618145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:01.181 [2024-10-29 22:15:20.618161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.181 [2024-10-29 22:15:20.618220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:01.181 [2024-10-29 22:15:20.618236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.181 #28 NEW cov: 12538 ft: 16041 corp: 24/859b lim: 85 exec/s: 28 rss: 75Mb L: 84/84 MS: 1 CrossOver- 00:08:01.181 [2024-10-29 22:15:20.657654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.182 [2024-10-29 22:15:20.657684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.182 #29 NEW cov: 12538 ft: 16059 corp: 25/880b lim: 85 exec/s: 29 rss: 75Mb L: 21/84 MS: 1 ChangeBinInt- 00:08:01.441 [2024-10-29 22:15:20.718289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.441 [2024-10-29 22:15:20.718325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.441 [2024-10-29 22:15:20.718375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:01.441 [2024-10-29 22:15:20.718392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.441 [2024-10-29 22:15:20.718450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 
nsid:0 00:08:01.441 [2024-10-29 22:15:20.718467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.441 [2024-10-29 22:15:20.718524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:01.441 [2024-10-29 22:15:20.718540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.441 #30 NEW cov: 12538 ft: 16075 corp: 26/964b lim: 85 exec/s: 30 rss: 75Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:08:01.441 [2024-10-29 22:15:20.757901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.441 [2024-10-29 22:15:20.757930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.441 #31 NEW cov: 12538 ft: 16085 corp: 27/992b lim: 85 exec/s: 31 rss: 75Mb L: 28/84 MS: 1 ChangeBinInt- 00:08:01.441 [2024-10-29 22:15:20.798048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.441 [2024-10-29 22:15:20.798078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.441 #32 NEW cov: 12538 ft: 16098 corp: 28/1019b lim: 85 exec/s: 32 rss: 75Mb L: 27/84 MS: 1 EraseBytes- 00:08:01.441 [2024-10-29 22:15:20.858377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.441 [2024-10-29 22:15:20.858407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.441 [2024-10-29 22:15:20.858451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:01.441 [2024-10-29 22:15:20.858468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.441 #33 NEW cov: 12538 ft: 16109 corp: 29/1055b lim: 85 exec/s: 33 rss: 75Mb L: 36/84 MS: 1 CMP- DE: "\0015\345O\376\242!\220"- 00:08:01.441 [2024-10-29 22:15:20.918880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.441 [2024-10-29 22:15:20.918910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.441 [2024-10-29 22:15:20.918960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:01.441 [2024-10-29 22:15:20.918977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.441 [2024-10-29 22:15:20.919033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:01.441 [2024-10-29 22:15:20.919050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.441 [2024-10-29 22:15:20.919110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:01.442 [2024-10-29 22:15:20.919126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 
p:0 m:0 dnr:1 00:08:01.442 #34 NEW cov: 12538 ft: 16124 corp: 30/1136b lim: 85 exec/s: 34 rss: 75Mb L: 81/84 MS: 1 InsertRepeatedBytes- 00:08:01.442 [2024-10-29 22:15:20.958641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.442 [2024-10-29 22:15:20.958670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.442 [2024-10-29 22:15:20.958728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:01.442 [2024-10-29 22:15:20.958745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.702 #35 NEW cov: 12538 ft: 16194 corp: 31/1177b lim: 85 exec/s: 35 rss: 75Mb L: 41/84 MS: 1 ShuffleBytes- 00:08:01.702 [2024-10-29 22:15:21.019202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.702 [2024-10-29 22:15:21.019231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.702 [2024-10-29 22:15:21.019280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:01.702 [2024-10-29 22:15:21.019296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.702 [2024-10-29 22:15:21.019358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:01.702 [2024-10-29 22:15:21.019375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.702 [2024-10-29 22:15:21.019433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:01.702 [2024-10-29 22:15:21.019448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.702 #36 NEW cov: 12538 ft: 16208 corp: 32/1252b lim: 85 exec/s: 36 rss: 75Mb L: 75/84 MS: 1 InsertRepeatedBytes- 00:08:01.702 [2024-10-29 22:15:21.058777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.702 [2024-10-29 22:15:21.058805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.702 #37 NEW cov: 12538 ft: 16230 corp: 33/1280b lim: 85 exec/s: 37 rss: 75Mb L: 28/84 MS: 1 CopyPart- 00:08:01.702 [2024-10-29 22:15:21.099415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.702 [2024-10-29 22:15:21.099443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.702 [2024-10-29 22:15:21.099495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:01.702 [2024-10-29 22:15:21.099511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.702 [2024-10-29 22:15:21.099568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:01.702 [2024-10-29 
22:15:21.099585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.702 [2024-10-29 22:15:21.099642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:01.702 [2024-10-29 22:15:21.099659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.702 #38 NEW cov: 12538 ft: 16269 corp: 34/1360b lim: 85 exec/s: 38 rss: 75Mb L: 80/84 MS: 1 CrossOver- 00:08:01.702 [2024-10-29 22:15:21.159025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.702 [2024-10-29 22:15:21.159054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.702 #39 NEW cov: 12538 ft: 16274 corp: 35/1386b lim: 85 exec/s: 39 rss: 75Mb L: 26/84 MS: 1 EraseBytes- 00:08:01.702 [2024-10-29 22:15:21.199135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:01.702 [2024-10-29 22:15:21.199163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.961 #45 NEW cov: 12538 ft: 16291 corp: 36/1415b lim: 85 exec/s: 22 rss: 75Mb L: 29/84 MS: 1 CopyPart- 00:08:01.961 #45 DONE cov: 12538 ft: 16291 corp: 36/1415b lim: 85 exec/s: 22 rss: 75Mb 00:08:01.961 ###### Recommended dictionary. ###### 00:08:01.961 "\000\007" # Uses: 3 00:08:01.961 "\366\261;\262O\3455\000" # Uses: 0 00:08:01.961 "\0015\345O\376\242!\220" # Uses: 0 00:08:01.961 ###### End of recommended dictionary. ###### 00:08:01.961 Done 45 runs in 2 second(s) 00:08:01.961 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:08:01.961 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:01.961 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:01.961 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:08:01.961 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:08:01.961 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:01.961 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:01.961 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:01.961 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:08:01.961 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:01.961 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:01.962 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:08:01.962 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:08:01.962 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:01.962 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:08:01.962 22:15:21 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:01.962 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:01.962 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:01.962 22:15:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:08:01.962 [2024-10-29 22:15:21.377733] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:08:01.962 [2024-10-29 22:15:21.377801] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3108448 ] 00:08:02.221 [2024-10-29 22:15:21.582437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.221 [2024-10-29 22:15:21.620840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.221 [2024-10-29 22:15:21.679935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.221 [2024-10-29 22:15:21.696111] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:02.221 INFO: Running with entropic power schedule (0xFF, 100). 00:08:02.221 INFO: Seed: 102115635 00:08:02.221 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:08:02.221 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:08:02.221 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:02.221 INFO: A corpus is not provided, starting from an empty corpus 00:08:02.221 #2 INITED exec/s: 0 rss: 66Mb 00:08:02.221 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:02.221 This may also happen if the target rejected all inputs we tried so far 00:08:02.479 [2024-10-29 22:15:21.753828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:02.479 [2024-10-29 22:15:21.753861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.738 NEW_FUNC[1/716]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:02.738 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:02.738 #6 NEW cov: 12244 ft: 12243 corp: 2/9b lim: 25 exec/s: 0 rss: 74Mb L: 8/8 MS: 4 InsertByte-InsertByte-EraseBytes-InsertRepeatedBytes- 00:08:02.738 [2024-10-29 22:15:22.094797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:02.738 [2024-10-29 22:15:22.094858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.738 #7 NEW cov: 12357 ft: 12935 corp: 3/17b lim: 25 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:08:02.738 [2024-10-29 22:15:22.164790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:02.738 [2024-10-29 22:15:22.164821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.738 #18 NEW cov: 12363 ft: 13215 corp: 4/26b lim: 25 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CrossOver- 00:08:02.738 [2024-10-29 22:15:22.204992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:02.738 [2024-10-29 22:15:22.205021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.738 [2024-10-29 22:15:22.205064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:02.738 [2024-10-29 22:15:22.205082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.738 #19 NEW cov: 12448 ft: 13831 corp: 5/38b lim: 25 exec/s: 0 rss: 74Mb L: 12/12 MS: 1 CopyPart- 00:08:02.738 [2024-10-29 22:15:22.244966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:02.738 [2024-10-29 22:15:22.244996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.997 #23 NEW cov: 12448 ft: 13883 corp: 6/43b lim: 25 exec/s: 0 rss: 74Mb L: 5/12 MS: 4 ChangeByte-CopyPart-ChangeBit-InsertRepeatedBytes- 00:08:02.997 [2024-10-29 22:15:22.285078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:02.997 [2024-10-29 22:15:22.285107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.997 #24 NEW cov: 12448 ft: 13945 corp: 7/52b lim: 25 exec/s: 0 rss: 74Mb L: 9/12 MS: 1 CrossOver- 00:08:02.997 [2024-10-29 22:15:22.345246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:02.997 [2024-10-29 
22:15:22.345273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.997 #25 NEW cov: 12448 ft: 14153 corp: 8/57b lim: 25 exec/s: 0 rss: 74Mb L: 5/12 MS: 1 ChangeBit- 00:08:02.997 [2024-10-29 22:15:22.405432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:02.997 [2024-10-29 22:15:22.405460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.997 #28 NEW cov: 12448 ft: 14214 corp: 9/65b lim: 25 exec/s: 0 rss: 74Mb L: 8/12 MS: 3 EraseBytes-ChangeBit-CopyPart- 00:08:02.997 [2024-10-29 22:15:22.465614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:02.997 [2024-10-29 22:15:22.465642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.997 #29 NEW cov: 12448 ft: 14281 corp: 10/74b lim: 25 exec/s: 0 rss: 74Mb L: 9/12 MS: 1 InsertByte- 00:08:02.997 [2024-10-29 22:15:22.505845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:02.997 [2024-10-29 22:15:22.505874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.997 [2024-10-29 22:15:22.505916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:02.997 [2024-10-29 22:15:22.505932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.257 #30 NEW cov: 12448 ft: 14303 corp: 11/86b lim: 25 exec/s: 0 rss: 74Mb L: 12/12 MS: 1 ChangeBit- 00:08:03.257 [2024-10-29 22:15:22.565898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.257 [2024-10-29 22:15:22.565927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.257 #31 NEW cov: 12448 ft: 14334 corp: 12/91b lim: 25 exec/s: 0 rss: 74Mb L: 5/12 MS: 1 ChangeBit- 00:08:03.257 [2024-10-29 22:15:22.605983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.257 [2024-10-29 22:15:22.606011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.257 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:03.257 #32 NEW cov: 12471 ft: 14378 corp: 13/100b lim: 25 exec/s: 0 rss: 74Mb L: 9/12 MS: 1 ChangeBinInt- 00:08:03.257 [2024-10-29 22:15:22.666584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.257 [2024-10-29 22:15:22.666613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.257 [2024-10-29 22:15:22.666667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:03.257 [2024-10-29 22:15:22.666683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.257 [2024-10-29 22:15:22.666739] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:03.257 [2024-10-29 22:15:22.666759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.257 [2024-10-29 22:15:22.666818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:03.257 [2024-10-29 22:15:22.666835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.257 #33 NEW cov: 12471 ft: 14895 corp: 14/121b lim: 25 exec/s: 0 rss: 75Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:08:03.257 [2024-10-29 22:15:22.726514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.257 [2024-10-29 22:15:22.726542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.257 [2024-10-29 22:15:22.726590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:03.257 [2024-10-29 22:15:22.726607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.257 #34 NEW cov: 12471 ft: 14925 corp: 15/133b lim: 25 exec/s: 34 rss: 75Mb L: 12/21 MS: 1 InsertRepeatedBytes- 00:08:03.257 [2024-10-29 22:15:22.766596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.257 [2024-10-29 22:15:22.766625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.257 [2024-10-29 22:15:22.766669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:03.257 [2024-10-29 22:15:22.766685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.516 #35 NEW cov: 12471 ft: 14940 corp: 16/145b lim: 25 exec/s: 35 rss: 75Mb L: 12/21 MS: 1 ShuffleBytes- 00:08:03.516 [2024-10-29 22:15:22.806947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.516 [2024-10-29 22:15:22.806976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.516 [2024-10-29 22:15:22.807034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:03.516 [2024-10-29 22:15:22.807050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.516 [2024-10-29 22:15:22.807108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:03.516 [2024-10-29 22:15:22.807125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.516 [2024-10-29 22:15:22.807183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:03.516 [2024-10-29 22:15:22.807200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.516 #36 NEW cov: 12471 ft: 14987 
corp: 17/165b lim: 25 exec/s: 36 rss: 75Mb L: 20/21 MS: 1 InsertRepeatedBytes- 00:08:03.516 [2024-10-29 22:15:22.866765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.516 [2024-10-29 22:15:22.866794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.516 #37 NEW cov: 12471 ft: 15013 corp: 18/170b lim: 25 exec/s: 37 rss: 75Mb L: 5/21 MS: 1 ChangeByte- 00:08:03.516 [2024-10-29 22:15:22.906909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.516 [2024-10-29 22:15:22.906938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.516 #38 NEW cov: 12471 ft: 15034 corp: 19/179b lim: 25 exec/s: 38 rss: 75Mb L: 9/21 MS: 1 ShuffleBytes- 00:08:03.516 [2024-10-29 22:15:22.967065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.516 [2024-10-29 22:15:22.967094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.516 #39 NEW cov: 12471 ft: 15044 corp: 20/185b lim: 25 exec/s: 39 rss: 75Mb L: 6/21 MS: 1 InsertByte- 00:08:03.516 [2024-10-29 22:15:23.007325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.516 [2024-10-29 22:15:23.007354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.516 [2024-10-29 22:15:23.007394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:03.516 [2024-10-29 22:15:23.007411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.777 #40 NEW cov: 12471 ft: 15063 corp: 21/197b lim: 25 exec/s: 40 rss: 75Mb L: 12/21 MS: 1 ShuffleBytes- 00:08:03.777 [2024-10-29 22:15:23.067376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.777 [2024-10-29 22:15:23.067405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.777 #41 NEW cov: 12471 ft: 15078 corp: 22/202b lim: 25 exec/s: 41 rss: 75Mb L: 5/21 MS: 1 ShuffleBytes- 00:08:03.777 [2024-10-29 22:15:23.107455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.777 [2024-10-29 22:15:23.107483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.777 #42 NEW cov: 12471 ft: 15085 corp: 23/210b lim: 25 exec/s: 42 rss: 75Mb L: 8/21 MS: 1 ChangeBinInt- 00:08:03.777 [2024-10-29 22:15:23.147618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.777 [2024-10-29 22:15:23.147646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.777 #43 NEW cov: 12471 ft: 15131 corp: 24/215b lim: 25 exec/s: 43 rss: 75Mb L: 5/21 MS: 1 ShuffleBytes- 00:08:03.777 [2024-10-29 22:15:23.187751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) 
sqid:1 cid:0 nsid:0 00:08:03.777 [2024-10-29 22:15:23.187779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.777 #44 NEW cov: 12471 ft: 15144 corp: 25/220b lim: 25 exec/s: 44 rss: 75Mb L: 5/21 MS: 1 ChangeBit- 00:08:03.777 [2024-10-29 22:15:23.227816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.777 [2024-10-29 22:15:23.227843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.777 #45 NEW cov: 12471 ft: 15155 corp: 26/226b lim: 25 exec/s: 45 rss: 75Mb L: 6/21 MS: 1 ShuffleBytes- 00:08:03.777 [2024-10-29 22:15:23.287967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:03.777 [2024-10-29 22:15:23.287995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.036 #46 NEW cov: 12471 ft: 15274 corp: 27/233b lim: 25 exec/s: 46 rss: 75Mb L: 7/21 MS: 1 EraseBytes- 00:08:04.036 [2024-10-29 22:15:23.328066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:04.036 [2024-10-29 22:15:23.328095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.036 #47 NEW cov: 12471 ft: 15297 corp: 28/239b lim: 25 exec/s: 47 rss: 75Mb L: 6/21 MS: 1 ChangeByte- 00:08:04.036 [2024-10-29 22:15:23.388242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:04.036 [2024-10-29 22:15:23.388271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.036 #48 NEW cov: 12471 ft: 15306 corp: 29/248b lim: 25 exec/s: 48 rss: 75Mb L: 9/21 MS: 1 ChangeByte- 00:08:04.036 [2024-10-29 22:15:23.428355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:04.036 [2024-10-29 22:15:23.428384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.036 #49 NEW cov: 12471 ft: 15322 corp: 30/257b lim: 25 exec/s: 49 rss: 75Mb L: 9/21 MS: 1 ShuffleBytes- 00:08:04.036 [2024-10-29 22:15:23.468478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:04.036 [2024-10-29 22:15:23.468507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.036 #50 NEW cov: 12471 ft: 15332 corp: 31/264b lim: 25 exec/s: 50 rss: 75Mb L: 7/21 MS: 1 ChangeByte- 00:08:04.036 [2024-10-29 22:15:23.528838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:04.036 [2024-10-29 22:15:23.528869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.036 [2024-10-29 22:15:23.528909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:04.036 [2024-10-29 22:15:23.528924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.036 #51 
NEW cov: 12471 ft: 15340 corp: 32/277b lim: 25 exec/s: 51 rss: 75Mb L: 13/21 MS: 1 InsertRepeatedBytes- 00:08:04.295 [2024-10-29 22:15:23.568758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:04.295 [2024-10-29 22:15:23.568788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.295 #57 NEW cov: 12471 ft: 15341 corp: 33/285b lim: 25 exec/s: 57 rss: 75Mb L: 8/21 MS: 1 ChangeBinInt- 00:08:04.295 [2024-10-29 22:15:23.628902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:04.295 [2024-10-29 22:15:23.628930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.295 #58 NEW cov: 12471 ft: 15359 corp: 34/290b lim: 25 exec/s: 58 rss: 75Mb L: 5/21 MS: 1 CrossOver- 00:08:04.295 [2024-10-29 22:15:23.689087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:04.295 [2024-10-29 22:15:23.689116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.295 #59 NEW cov: 12471 ft: 15364 corp: 35/299b lim: 25 exec/s: 59 rss: 75Mb L: 9/21 MS: 1 InsertByte- 00:08:04.295 [2024-10-29 22:15:23.749327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:04.295 [2024-10-29 22:15:23.749356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.295 #60 NEW cov: 12471 ft: 15371 corp: 36/306b lim: 25 exec/s: 30 rss: 75Mb L: 7/21 MS: 1 CrossOver- 00:08:04.295 #60 DONE cov: 12471 ft: 15371 corp: 36/306b lim: 25 exec/s: 30 rss: 75Mb 00:08:04.295 Done 60 runs in 2 second(s) 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # 
trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:04.555 22:15:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:08:04.555 [2024-10-29 22:15:23.942553] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:08:04.555 [2024-10-29 22:15:23.942629] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3108804 ] 00:08:04.813 [2024-10-29 22:15:24.142885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.813 [2024-10-29 22:15:24.181373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.813 [2024-10-29 22:15:24.240502] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.813 [2024-10-29 22:15:24.256671] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:08:04.813 INFO: Running with entropic power schedule (0xFF, 100). 00:08:04.813 INFO: Seed: 2664130154 00:08:04.813 INFO: Loaded 1 modules (387298 inline 8-bit counters): 387298 [0x2c375cc, 0x2c95eae), 00:08:04.813 INFO: Loaded 1 PC tables (387298 PCs): 387298 [0x2c95eb0,0x327ecd0), 00:08:04.813 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:04.813 INFO: A corpus is not provided, starting from an empty corpus 00:08:04.813 #2 INITED exec/s: 0 rss: 66Mb 00:08:04.813 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:04.813 This may also happen if the target rejected all inputs we tried so far 00:08:04.813 [2024-10-29 22:15:24.322219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.813 [2024-10-29 22:15:24.322252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.813 [2024-10-29 22:15:24.322317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.813 [2024-10-29 22:15:24.322335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.330 NEW_FUNC[1/717]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:08:05.330 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:05.330 #19 NEW cov: 12316 ft: 12317 corp: 2/60b lim: 100 exec/s: 0 rss: 74Mb L: 59/59 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:05.330 [2024-10-29 22:15:24.663418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.331 [2024-10-29 22:15:24.663506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.331 [2024-10-29 22:15:24.663618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.331 [2024-10-29 22:15:24.663661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.331 #20 NEW cov: 12429 ft: 12995 corp: 3/114b lim: 100 exec/s: 0 rss: 74Mb L: 54/59 MS: 1 EraseBytes- 00:08:05.331 [2024-10-29 22:15:24.733375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.331 [2024-10-29 22:15:24.733408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.331 [2024-10-29 22:15:24.733448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.331 [2024-10-29 22:15:24.733466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.331 [2024-10-29 22:15:24.733524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073490595839 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.331 [2024-10-29 22:15:24.733541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.331 #21 NEW cov: 12435 ft: 13673 corp: 4/186b lim: 100 exec/s: 0 rss: 74Mb L: 72/72 MS: 1 InsertRepeatedBytes- 00:08:05.331 [2024-10-29 22:15:24.793544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506099725465744114 
len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.331 [2024-10-29 22:15:24.793578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.331 [2024-10-29 22:15:24.793616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.331 [2024-10-29 22:15:24.793634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.331 [2024-10-29 22:15:24.793690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.331 [2024-10-29 22:15:24.793707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.331 #22 NEW cov: 12520 ft: 14022 corp: 5/246b lim: 100 exec/s: 0 rss: 74Mb L: 60/72 MS: 1 InsertByte- 00:08:05.331 [2024-10-29 22:15:24.843652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.331 [2024-10-29 22:15:24.843683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.331 [2024-10-29 22:15:24.843721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.331 [2024-10-29 22:15:24.843737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.331 [2024-10-29 22:15:24.843793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073490595839 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.331 [2024-10-29 22:15:24.843809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.590 #23 NEW cov: 12520 ft: 14117 corp: 6/319b lim: 100 exec/s: 0 rss: 74Mb L: 73/73 MS: 1 InsertByte- 00:08:05.590 [2024-10-29 22:15:24.903768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506099725465744114 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.590 [2024-10-29 22:15:24.903797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.590 [2024-10-29 22:15:24.903833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.590 [2024-10-29 22:15:24.903849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.590 [2024-10-29 22:15:24.903904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.590 [2024-10-29 22:15:24.903921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.590 #24 NEW cov: 12520 ft: 14204 corp: 7/379b lim: 100 exec/s: 0 rss: 74Mb L: 60/73 MS: 1 
ChangeBit- 00:08:05.590 [2024-10-29 22:15:24.963808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.590 [2024-10-29 22:15:24.963836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.590 [2024-10-29 22:15:24.963886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.590 [2024-10-29 22:15:24.963903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.590 #25 NEW cov: 12520 ft: 14366 corp: 8/438b lim: 100 exec/s: 0 rss: 74Mb L: 59/73 MS: 1 CrossOver- 00:08:05.590 [2024-10-29 22:15:25.003906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17501818227187184370 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.590 [2024-10-29 22:15:25.003934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.590 [2024-10-29 22:15:25.003972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.590 [2024-10-29 22:15:25.003988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.590 #26 NEW cov: 12520 ft: 14419 corp: 9/497b lim: 100 exec/s: 0 rss: 74Mb L: 59/73 MS: 1 ChangeBit- 00:08:05.590 [2024-10-29 22:15:25.064218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506099725465744114 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.590 [2024-10-29 22:15:25.064246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.590 [2024-10-29 22:15:25.064282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.590 [2024-10-29 22:15:25.064302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.590 [2024-10-29 22:15:25.064374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.590 [2024-10-29 22:15:25.064391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.590 #27 NEW cov: 12520 ft: 14485 corp: 10/557b lim: 100 exec/s: 0 rss: 74Mb L: 60/73 MS: 1 ShuffleBytes- 00:08:05.849 [2024-10-29 22:15:25.124544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.849 [2024-10-29 22:15:25.124575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.849 [2024-10-29 22:15:25.124612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:05.849 [2024-10-29 22:15:25.124628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.849 [2024-10-29 22:15:25.124681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.849 [2024-10-29 22:15:25.124697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.849 [2024-10-29 22:15:25.124751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17506321826814554866 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.849 [2024-10-29 22:15:25.124767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.849 #28 NEW cov: 12520 ft: 14912 corp: 11/656b lim: 100 exec/s: 0 rss: 74Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:08:05.849 [2024-10-29 22:15:25.184528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.849 [2024-10-29 22:15:25.184554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.849 [2024-10-29 22:15:25.184619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.849 [2024-10-29 22:15:25.184635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.849 [2024-10-29 22:15:25.184693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2810246167260233727 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.849 [2024-10-29 22:15:25.184709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.849 NEW_FUNC[1/1]: 0x1c2e018 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:05.849 #29 NEW cov: 12543 ft: 14959 corp: 12/729b lim: 100 exec/s: 0 rss: 74Mb L: 73/99 MS: 1 ChangeByte- 00:08:05.849 [2024-10-29 22:15:25.224668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.849 [2024-10-29 22:15:25.224695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.849 [2024-10-29 22:15:25.224740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.849 [2024-10-29 22:15:25.224756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.849 [2024-10-29 22:15:25.224827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2810246167260233727 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.849 [2024-10-29 22:15:25.224843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:08:05.850 #30 NEW cov: 12543 ft: 15013 corp: 13/802b lim: 100 exec/s: 0 rss: 74Mb L: 73/99 MS: 1 ChangeByte- 00:08:05.850 [2024-10-29 22:15:25.284851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506061302688314098 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.850 [2024-10-29 22:15:25.284883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.850 [2024-10-29 22:15:25.284941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.850 [2024-10-29 22:15:25.284957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.850 [2024-10-29 22:15:25.285011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.850 [2024-10-29 22:15:25.285027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.850 #31 NEW cov: 12543 ft: 15035 corp: 14/862b lim: 100 exec/s: 31 rss: 75Mb L: 60/99 MS: 1 CMP- DE: "\006\000"- 00:08:05.850 [2024-10-29 22:15:25.345025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.850 [2024-10-29 22:15:25.345052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.850 [2024-10-29 22:15:25.345116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.850 [2024-10-29 22:15:25.345133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.850 [2024-10-29 22:15:25.345189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18385664003544380159 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.850 [2024-10-29 22:15:25.345205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.850 #32 NEW cov: 12543 ft: 15058 corp: 15/936b lim: 100 exec/s: 32 rss: 75Mb L: 74/99 MS: 1 InsertByte- 00:08:06.109 [2024-10-29 22:15:25.385176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.109 [2024-10-29 22:15:25.385203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.109 [2024-10-29 22:15:25.385252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.109 [2024-10-29 22:15:25.385268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.109 [2024-10-29 22:15:25.385317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073490595839 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:06.109 [2024-10-29 22:15:25.385333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.109 #33 NEW cov: 12543 ft: 15074 corp: 16/1008b lim: 100 exec/s: 33 rss: 75Mb L: 72/99 MS: 1 CopyPart- 00:08:06.109 [2024-10-29 22:15:25.425444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.109 [2024-10-29 22:15:25.425472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.109 [2024-10-29 22:15:25.425536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.109 [2024-10-29 22:15:25.425553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.109 [2024-10-29 22:15:25.425607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073490595839 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.109 [2024-10-29 22:15:25.425625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.109 [2024-10-29 22:15:25.425681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.109 [2024-10-29 22:15:25.425697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.109 #34 NEW cov: 12543 ft: 15113 corp: 17/1104b lim: 100 exec/s: 34 rss: 75Mb L: 96/99 MS: 1 CopyPart- 00:08:06.110 [2024-10-29 22:15:25.465548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.110 [2024-10-29 22:15:25.465574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.110 [2024-10-29 22:15:25.465644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.110 [2024-10-29 22:15:25.465660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.110 [2024-10-29 22:15:25.465715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073490595839 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.110 [2024-10-29 22:15:25.465732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.110 [2024-10-29 22:15:25.465787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.110 [2024-10-29 22:15:25.465803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.110 #35 NEW cov: 12543 ft: 15163 corp: 18/1202b lim: 100 exec/s: 35 rss: 75Mb L: 98/99 MS: 1 CopyPart- 00:08:06.110 [2024-10-29 
22:15:25.525551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506061302688314098 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.110 [2024-10-29 22:15:25.525578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.110 [2024-10-29 22:15:25.525640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.110 [2024-10-29 22:15:25.525657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.110 [2024-10-29 22:15:25.525714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.110 [2024-10-29 22:15:25.525730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.110 #36 NEW cov: 12543 ft: 15188 corp: 19/1263b lim: 100 exec/s: 36 rss: 75Mb L: 61/99 MS: 1 InsertByte- 00:08:06.110 [2024-10-29 22:15:25.585759] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506099725465744114 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.110 [2024-10-29 22:15:25.585786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.110 [2024-10-29 22:15:25.585850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.110 [2024-10-29 22:15:25.585867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.110 [2024-10-29 22:15:25.585921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.110 [2024-10-29 22:15:25.585940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.110 #37 NEW cov: 12543 ft: 15196 corp: 20/1323b lim: 100 exec/s: 37 rss: 75Mb L: 60/99 MS: 1 ChangeBit- 00:08:06.110 [2024-10-29 22:15:25.625851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.110 [2024-10-29 22:15:25.625878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.110 [2024-10-29 22:15:25.625941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.110 [2024-10-29 22:15:25.625957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.110 [2024-10-29 22:15:25.626009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.110 [2024-10-29 22:15:25.626024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.369 #42 NEW cov: 12543 ft: 15242 corp: 21/1386b lim: 100 exec/s: 42 rss: 75Mb L: 63/99 MS: 5 CrossOver-CopyPart-ChangeBit-EraseBytes-CrossOver- 00:08:06.369 [2024-10-29 22:15:25.686000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506099725465744114 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.686026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.369 [2024-10-29 22:15:25.686090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.686106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.369 [2024-10-29 22:15:25.686163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17506321826814554866 len:62067 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.686178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.369 #43 NEW cov: 12543 ft: 15260 corp: 22/1446b lim: 100 exec/s: 43 rss: 75Mb L: 60/99 MS: 1 ChangeBit- 00:08:06.369 [2024-10-29 22:15:25.726277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.726308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.369 [2024-10-29 22:15:25.726365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.726380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.369 [2024-10-29 22:15:25.726435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073490595839 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.726450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.369 [2024-10-29 22:15:25.726504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.726520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.369 #44 NEW cov: 12543 ft: 15304 corp: 23/1542b lim: 100 exec/s: 44 rss: 75Mb L: 96/99 MS: 1 ShuffleBytes- 00:08:06.369 [2024-10-29 22:15:25.766371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.766399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.369 [2024-10-29 22:15:25.766477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 
lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.766493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.369 [2024-10-29 22:15:25.766548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073490595839 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.766564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.369 [2024-10-29 22:15:25.766618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.766634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.369 #45 NEW cov: 12543 ft: 15326 corp: 24/1640b lim: 100 exec/s: 45 rss: 75Mb L: 98/99 MS: 1 ShuffleBytes- 00:08:06.369 [2024-10-29 22:15:25.826551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.826577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.369 [2024-10-29 22:15:25.826633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.826647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.369 [2024-10-29 22:15:25.826717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073490595839 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.826733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.369 [2024-10-29 22:15:25.826787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.826803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.369 #46 NEW cov: 12543 ft: 15336 corp: 25/1736b lim: 100 exec/s: 46 rss: 75Mb L: 96/99 MS: 1 ChangeByte- 00:08:06.369 [2024-10-29 22:15:25.886756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.886783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.369 [2024-10-29 22:15:25.886848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.886864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.369 [2024-10-29 22:15:25.886918] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073490592498 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.886933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.369 [2024-10-29 22:15:25.886992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.369 [2024-10-29 22:15:25.887007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.629 #47 NEW cov: 12543 ft: 15400 corp: 26/1834b lim: 100 exec/s: 47 rss: 75Mb L: 98/99 MS: 1 PersAutoDict- DE: "\006\000"- 00:08:06.629 [2024-10-29 22:15:25.946918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.629 [2024-10-29 22:15:25.946946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.629 [2024-10-29 22:15:25.946992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.629 [2024-10-29 22:15:25.947008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.629 [2024-10-29 22:15:25.947061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073490595839 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.629 [2024-10-29 22:15:25.947076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.629 [2024-10-29 22:15:25.947130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.629 [2024-10-29 22:15:25.947146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.629 #48 NEW cov: 12543 ft: 15414 corp: 27/1931b lim: 100 exec/s: 48 rss: 75Mb L: 97/99 MS: 1 CopyPart- 00:08:06.629 [2024-10-29 22:15:25.986840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506061302688314098 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.629 [2024-10-29 22:15:25.986866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.629 [2024-10-29 22:15:25.986929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554671 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.629 [2024-10-29 22:15:25.986945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.629 [2024-10-29 22:15:25.987001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.629 [2024-10-29 22:15:25.987017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:08:06.629 #49 NEW cov: 12543 ft: 15416 corp: 28/1992b lim: 100 exec/s: 49 rss: 75Mb L: 61/99 MS: 1 InsertByte- 00:08:06.629 [2024-10-29 22:15:26.026819] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.629 [2024-10-29 22:15:26.026847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.629 [2024-10-29 22:15:26.026890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446505479467365106 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.629 [2024-10-29 22:15:26.026905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.629 #55 NEW cov: 12543 ft: 15426 corp: 29/2047b lim: 100 exec/s: 55 rss: 75Mb L: 55/99 MS: 1 EraseBytes- 00:08:06.629 [2024-10-29 22:15:26.087026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.629 [2024-10-29 22:15:26.087058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.629 [2024-10-29 22:15:26.087115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17509995350997529330 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.629 [2024-10-29 22:15:26.087131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.629 #61 NEW cov: 12543 ft: 15444 corp: 30/2102b lim: 100 exec/s: 61 rss: 75Mb L: 55/99 MS: 1 EraseBytes- 00:08:06.629 [2024-10-29 22:15:26.127244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.629 [2024-10-29 22:15:26.127272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.629 [2024-10-29 22:15:26.127321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.629 [2024-10-29 22:15:26.127337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.629 [2024-10-29 22:15:26.127393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18443070549526577151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.629 [2024-10-29 22:15:26.127408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.888 #62 NEW cov: 12543 ft: 15459 corp: 31/2180b lim: 100 exec/s: 62 rss: 75Mb L: 78/99 MS: 1 InsertRepeatedBytes- 00:08:06.888 [2024-10-29 22:15:26.187581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.888 [2024-10-29 22:15:26.187609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.888 [2024-10-29 22:15:26.187675] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.888 [2024-10-29 22:15:26.187691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.888 [2024-10-29 22:15:26.187744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073490595839 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.888 [2024-10-29 22:15:26.187760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.888 [2024-10-29 22:15:26.187818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.888 [2024-10-29 22:15:26.187834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.888 #63 NEW cov: 12543 ft: 15467 corp: 32/2268b lim: 100 exec/s: 63 rss: 75Mb L: 88/99 MS: 1 EraseBytes- 00:08:06.888 [2024-10-29 22:15:26.227564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506061302688314098 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.889 [2024-10-29 22:15:26.227591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.889 [2024-10-29 22:15:26.227637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.889 [2024-10-29 22:15:26.227653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.889 [2024-10-29 22:15:26.227711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17506321826814554354 len:62131 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.889 [2024-10-29 22:15:26.227726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.889 #64 NEW cov: 12543 ft: 15495 corp: 33/2328b lim: 100 exec/s: 64 rss: 75Mb L: 60/99 MS: 1 CrossOver- 00:08:06.889 [2024-10-29 22:15:26.267636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.889 [2024-10-29 22:15:26.267663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.889 [2024-10-29 22:15:26.267709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321826814554866 len:62195 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.889 [2024-10-29 22:15:26.267725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.889 [2024-10-29 22:15:26.267780] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744069515313151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.889 [2024-10-29 22:15:26.267796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:08:06.889 #65 NEW cov: 12543 ft: 15505 corp: 34/2393b lim: 100 exec/s: 32 rss: 75Mb L: 65/99 MS: 1 PersAutoDict- DE: "\006\000"- 00:08:06.889 #65 DONE cov: 12543 ft: 15505 corp: 34/2393b lim: 100 exec/s: 32 rss: 75Mb 00:08:06.889 ###### Recommended dictionary. ###### 00:08:06.889 "\006\000" # Uses: 4 00:08:06.889 ###### End of recommended dictionary. ###### 00:08:06.889 Done 65 runs in 2 second(s) 00:08:07.147 22:15:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:08:07.147 22:15:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:07.147 22:15:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:07.147 22:15:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:08:07.147 00:08:07.147 real 1m4.080s 00:08:07.147 user 1m40.278s 00:08:07.147 sys 0m7.441s 00:08:07.147 22:15:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.147 22:15:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:07.147 ************************************ 00:08:07.147 END TEST nvmf_llvm_fuzz 00:08:07.147 ************************************ 00:08:07.147 22:15:26 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:08:07.147 22:15:26 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:08:07.147 22:15:26 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:07.147 22:15:26 llvm_fuzz -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:07.147 22:15:26 llvm_fuzz -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.147 22:15:26 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:07.147 ************************************ 00:08:07.147 START TEST vfio_llvm_fuzz 00:08:07.147 ************************************ 00:08:07.147 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:07.147 * Looking for test storage... 
00:08:07.147 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:07.147 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:07.147 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:08:07.147 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:07.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.409 --rc genhtml_branch_coverage=1 00:08:07.409 --rc genhtml_function_coverage=1 00:08:07.409 --rc genhtml_legend=1 00:08:07.409 --rc geninfo_all_blocks=1 00:08:07.409 --rc geninfo_unexecuted_blocks=1 00:08:07.409 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:07.409 ' 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:07.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.409 --rc genhtml_branch_coverage=1 00:08:07.409 --rc genhtml_function_coverage=1 00:08:07.409 --rc genhtml_legend=1 00:08:07.409 --rc geninfo_all_blocks=1 00:08:07.409 --rc geninfo_unexecuted_blocks=1 00:08:07.409 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:07.409 ' 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:07.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.409 --rc genhtml_branch_coverage=1 00:08:07.409 --rc genhtml_function_coverage=1 00:08:07.409 --rc genhtml_legend=1 00:08:07.409 --rc geninfo_all_blocks=1 00:08:07.409 --rc geninfo_unexecuted_blocks=1 00:08:07.409 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:07.409 ' 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:07.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.409 --rc genhtml_branch_coverage=1 00:08:07.409 --rc genhtml_function_coverage=1 00:08:07.409 --rc genhtml_legend=1 00:08:07.409 --rc geninfo_all_blocks=1 00:08:07.409 --rc geninfo_unexecuted_blocks=1 00:08:07.409 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:07.409 ' 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:07.409 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:07.410 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:07.410 #define SPDK_CONFIG_H 00:08:07.410 #define SPDK_CONFIG_AIO_FSDEV 1 00:08:07.410 #define SPDK_CONFIG_APPS 1 00:08:07.410 #define SPDK_CONFIG_ARCH native 00:08:07.410 #undef SPDK_CONFIG_ASAN 00:08:07.410 #undef SPDK_CONFIG_AVAHI 00:08:07.410 #undef SPDK_CONFIG_CET 00:08:07.410 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:08:07.410 #define SPDK_CONFIG_COVERAGE 1 00:08:07.410 #define SPDK_CONFIG_CROSS_PREFIX 00:08:07.410 #undef SPDK_CONFIG_CRYPTO 00:08:07.410 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:07.410 #undef SPDK_CONFIG_CUSTOMOCF 00:08:07.410 #undef SPDK_CONFIG_DAOS 00:08:07.410 #define SPDK_CONFIG_DAOS_DIR 00:08:07.410 #define SPDK_CONFIG_DEBUG 1 00:08:07.410 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:07.410 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:07.410 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:07.410 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:07.410 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:07.410 #undef SPDK_CONFIG_DPDK_UADK 00:08:07.410 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:07.410 #define SPDK_CONFIG_EXAMPLES 1 00:08:07.410 #undef SPDK_CONFIG_FC 00:08:07.410 #define SPDK_CONFIG_FC_PATH 00:08:07.410 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:07.410 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:07.410 #define SPDK_CONFIG_FSDEV 1 00:08:07.410 #undef SPDK_CONFIG_FUSE 00:08:07.410 #define SPDK_CONFIG_FUZZER 1 00:08:07.411 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:07.411 #undef 
SPDK_CONFIG_GOLANG 00:08:07.411 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:07.411 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:07.411 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:07.411 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:07.411 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:07.411 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:07.411 #undef SPDK_CONFIG_HAVE_LZ4 00:08:07.411 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:08:07.411 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:08:07.411 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:07.411 #define SPDK_CONFIG_IDXD 1 00:08:07.411 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:07.411 #undef SPDK_CONFIG_IPSEC_MB 00:08:07.411 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:07.411 #define SPDK_CONFIG_ISAL 1 00:08:07.411 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:07.411 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:07.411 #define SPDK_CONFIG_LIBDIR 00:08:07.411 #undef SPDK_CONFIG_LTO 00:08:07.411 #define SPDK_CONFIG_MAX_LCORES 128 00:08:07.411 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:08:07.411 #define SPDK_CONFIG_NVME_CUSE 1 00:08:07.411 #undef SPDK_CONFIG_OCF 00:08:07.411 #define SPDK_CONFIG_OCF_PATH 00:08:07.411 #define SPDK_CONFIG_OPENSSL_PATH 00:08:07.411 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:07.411 #define SPDK_CONFIG_PGO_DIR 00:08:07.411 #undef SPDK_CONFIG_PGO_USE 00:08:07.411 #define SPDK_CONFIG_PREFIX /usr/local 00:08:07.411 #undef SPDK_CONFIG_RAID5F 00:08:07.411 #undef SPDK_CONFIG_RBD 00:08:07.411 #define SPDK_CONFIG_RDMA 1 00:08:07.411 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:07.411 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:07.411 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:07.411 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:07.411 #undef SPDK_CONFIG_SHARED 00:08:07.411 #undef SPDK_CONFIG_SMA 00:08:07.411 #define SPDK_CONFIG_TESTS 1 00:08:07.411 #undef SPDK_CONFIG_TSAN 00:08:07.411 #define SPDK_CONFIG_UBLK 1 00:08:07.411 #define SPDK_CONFIG_UBSAN 1 00:08:07.411 #undef SPDK_CONFIG_UNIT_TESTS 00:08:07.411 #undef SPDK_CONFIG_URING 00:08:07.411 #define SPDK_CONFIG_URING_PATH 00:08:07.411 #undef SPDK_CONFIG_URING_ZNS 00:08:07.411 #undef SPDK_CONFIG_USDT 00:08:07.411 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:07.411 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:07.411 #define SPDK_CONFIG_VFIO_USER 1 00:08:07.411 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:07.411 #define SPDK_CONFIG_VHOST 1 00:08:07.411 #define SPDK_CONFIG_VIRTIO 1 00:08:07.411 #undef SPDK_CONFIG_VTUNE 00:08:07.411 #define SPDK_CONFIG_VTUNE_DIR 00:08:07.411 #define SPDK_CONFIG_WERROR 1 00:08:07.411 #define SPDK_CONFIG_WPDK_DIR 00:08:07.411 #undef SPDK_CONFIG_XNVME 00:08:07.411 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:07.411 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:08:07.412 22:15:26 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
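The long run of ": 0" / "export SPDK_TEST_*" entries above and below is bash tracing a default-and-export idiom for the test flags: each flag keeps whatever value the job's autorun-spdk.conf gave it and otherwise falls back to 0, then is exported so the fuzzer processes inherit it. A minimal sketch of that idiom, assuming the ': "${VAR:=default}"' form (the variable names here are just examples, not the full flag list):

#!/usr/bin/env bash
# Keep the caller's value if the flag is already set, otherwise default to 0,
# then export it so child processes (the fuzz targets) can read it.
: "${SPDK_TEST_FUZZER:=0}"
: "${SPDK_TEST_FUZZER_SHORT:=0}"
: "${SPDK_RUN_UBSAN:=0}"
export SPDK_TEST_FUZZER SPDK_TEST_FUZZER_SHORT SPDK_RUN_UBSAN
echo "fuzzer=$SPDK_TEST_FUZZER short=$SPDK_TEST_FUZZER_SHORT ubsan=$SPDK_RUN_UBSAN"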
00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:07.412 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 3109193 ]] 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 3109193 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.nC51wm 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.nC51wm/tests/vfio /tmp/spdk.nC51wm 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=785162240 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4499267584 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=86670983168 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500360192 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=7829377024 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:07.413 
22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47245414400 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250178048 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18894340096 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900074496 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5734400 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47249584128 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250182144 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=598016 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450020864 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450033152 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:07.413 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:08:07.413 * Looking for test storage... 
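The set_test_storage trace around this point (the df -T scan above and the target_space comparison that continues below) picks a directory for scratch test data: it reads each mount's filesystem type, size and free space, then settles on the first candidate whose backing mount can hold the roughly 2 GiB requested for the fuzz corpus. A simplified sketch of that selection, using a hypothetical pick_test_storage helper rather than the real autotest_common.sh code:

#!/usr/bin/env bash
# Pick the first candidate directory whose backing filesystem has enough
# free space for the requested scratch size; free space is read from df -P.
pick_test_storage() {
    local requested_size=$1; shift
    local candidate avail_kb
    for candidate in "$@"; do
        mkdir -p "$candidate" || continue
        # df -P data line: Filesystem 1024-blocks Used Available Capacity Mounted-on
        read -r _ _ _ avail_kb _ < <(df -P "$candidate" | tail -n 1)
        if ((avail_kb * 1024 >= requested_size)); then
            echo "$candidate"
            return 0
        fi
    done
    return 1
}

# Example: ask for the 2214592512 bytes requested in the log above.
pick_test_storage 2214592512 /var/tmp/fuzz-scratch /tmp/fuzz-scratch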
00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=86670983168 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=10043969536 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:07.414 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # true 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:08:07.414 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:07.672 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:07.672 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.673 --rc genhtml_branch_coverage=1 00:08:07.673 --rc genhtml_function_coverage=1 00:08:07.673 --rc genhtml_legend=1 00:08:07.673 --rc geninfo_all_blocks=1 00:08:07.673 --rc geninfo_unexecuted_blocks=1 00:08:07.673 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:07.673 ' 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.673 --rc genhtml_branch_coverage=1 00:08:07.673 --rc genhtml_function_coverage=1 00:08:07.673 --rc genhtml_legend=1 00:08:07.673 --rc geninfo_all_blocks=1 00:08:07.673 --rc geninfo_unexecuted_blocks=1 00:08:07.673 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:07.673 ' 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.673 --rc genhtml_branch_coverage=1 00:08:07.673 --rc genhtml_function_coverage=1 00:08:07.673 --rc genhtml_legend=1 00:08:07.673 --rc geninfo_all_blocks=1 00:08:07.673 --rc geninfo_unexecuted_blocks=1 00:08:07.673 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:07.673 ' 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.673 --rc genhtml_branch_coverage=1 00:08:07.673 --rc genhtml_function_coverage=1 00:08:07.673 --rc genhtml_legend=1 00:08:07.673 --rc geninfo_all_blocks=1 00:08:07.673 --rc geninfo_unexecuted_blocks=1 00:08:07.673 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:07.673 ' 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:07.673 22:15:26 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:07.673 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:07.673 22:15:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:07.673 [2024-10-29 22:15:27.015220] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:08:07.673 [2024-10-29 22:15:27.015291] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3109251 ] 00:08:07.673 [2024-10-29 22:15:27.110721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.673 [2024-10-29 22:15:27.156096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.931 INFO: Running with entropic power schedule (0xFF, 100). 00:08:07.931 INFO: Seed: 1445191575 00:08:07.931 INFO: Loaded 1 modules (384534 inline 8-bit counters): 384534 [0x2bf8e0c, 0x2c56c22), 00:08:07.931 INFO: Loaded 1 PC tables (384534 PCs): 384534 [0x2c56c28,0x3234d88), 00:08:07.931 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:07.931 INFO: A corpus is not provided, starting from an empty corpus 00:08:07.931 #2 INITED exec/s: 0 rss: 69Mb 00:08:07.931 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:07.931 This may also happen if the target rejected all inputs we tried so far 00:08:07.931 [2024-10-29 22:15:27.410254] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:08.445 NEW_FUNC[1/672]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:08.445 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:08.445 #8 NEW cov: 11164 ft: 11003 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:08:08.702 #22 NEW cov: 11188 ft: 14414 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 4 ChangeBit-InsertRepeatedBytes-ShuffleBytes-CopyPart- 00:08:08.959 NEW_FUNC[1/1]: 0x1bfa468 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:08.959 #23 NEW cov: 11205 ft: 15598 corp: 4/19b lim: 6 exec/s: 0 rss: 77Mb L: 6/6 MS: 1 CrossOver- 00:08:09.217 #26 NEW cov: 11205 ft: 16563 corp: 5/25b lim: 6 exec/s: 26 rss: 77Mb L: 6/6 MS: 3 EraseBytes-CopyPart-InsertByte- 00:08:09.217 #36 NEW cov: 11205 ft: 16620 corp: 6/31b lim: 6 exec/s: 36 rss: 77Mb L: 6/6 MS: 5 EraseBytes-ShuffleBytes-ChangeByte-CrossOver-InsertRepeatedBytes- 00:08:09.474 #42 NEW cov: 11205 ft: 16964 corp: 7/37b lim: 6 exec/s: 42 rss: 77Mb L: 6/6 MS: 1 ChangeBit- 00:08:09.732 #43 NEW cov: 11205 ft: 17170 corp: 8/43b lim: 6 exec/s: 43 rss: 77Mb L: 6/6 MS: 1 ChangeByte- 00:08:09.989 #46 NEW cov: 11212 ft: 17395 corp: 9/49b lim: 6 exec/s: 46 rss: 77Mb L: 6/6 MS: 3 ShuffleBytes-CopyPart-CrossOver- 00:08:10.248 #52 NEW cov: 11212 ft: 17559 corp: 10/55b lim: 6 exec/s: 26 rss: 77Mb L: 6/6 MS: 1 ChangeBit- 00:08:10.248 #52 DONE cov: 11212 ft: 17559 corp: 10/55b lim: 6 exec/s: 26 rss: 77Mb 00:08:10.248 Done 52 runs in 2 second(s) 00:08:10.248 [2024-10-29 22:15:29.540503] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling 
controller 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:10.248 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:10.248 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:10.507 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:10.507 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:10.507 22:15:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:10.507 [2024-10-29 22:15:29.805812] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:08:10.507 [2024-10-29 22:15:29.805895] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3109615 ] 00:08:10.507 [2024-10-29 22:15:29.901940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.507 [2024-10-29 22:15:29.946323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.765 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:10.765 INFO: Seed: 4236203864 00:08:10.765 INFO: Loaded 1 modules (384534 inline 8-bit counters): 384534 [0x2bf8e0c, 0x2c56c22), 00:08:10.765 INFO: Loaded 1 PC tables (384534 PCs): 384534 [0x2c56c28,0x3234d88), 00:08:10.765 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:10.765 INFO: A corpus is not provided, starting from an empty corpus 00:08:10.765 #2 INITED exec/s: 0 rss: 68Mb 00:08:10.765 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:10.765 This may also happen if the target rejected all inputs we tried so far 00:08:10.765 [2024-10-29 22:15:30.198177] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:08:10.765 [2024-10-29 22:15:30.265544] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:10.765 [2024-10-29 22:15:30.265570] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:10.765 [2024-10-29 22:15:30.265593] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:11.281 NEW_FUNC[1/674]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:08:11.281 NEW_FUNC[2/674]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:11.281 #51 NEW cov: 11161 ft: 11107 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 4 CopyPart-InsertByte-CopyPart-InsertByte- 00:08:11.281 [2024-10-29 22:15:30.759658] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:11.281 [2024-10-29 22:15:30.759698] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:11.281 [2024-10-29 22:15:30.759717] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:11.538 #57 NEW cov: 11175 ft: 14255 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ChangeBit- 00:08:11.538 [2024-10-29 22:15:30.950365] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:11.538 [2024-10-29 22:15:30.950389] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:11.538 [2024-10-29 22:15:30.950422] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:11.797 NEW_FUNC[1/1]: 0x1bfa468 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:11.797 #73 NEW cov: 11195 ft: 15764 corp: 4/13b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 ChangeBit- 00:08:11.797 [2024-10-29 22:15:31.163262] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:11.797 [2024-10-29 22:15:31.163285] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:11.797 [2024-10-29 22:15:31.163310] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:11.797 #74 NEW cov: 11195 ft: 16756 corp: 5/17b lim: 4 exec/s: 74 rss: 77Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:12.055 [2024-10-29 22:15:31.371557] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:12.055 [2024-10-29 22:15:31.371580] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:12.055 [2024-10-29 22:15:31.371598] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 
return failure 00:08:12.055 #75 NEW cov: 11195 ft: 17165 corp: 6/21b lim: 4 exec/s: 75 rss: 77Mb L: 4/4 MS: 1 CopyPart- 00:08:12.055 [2024-10-29 22:15:31.568470] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:12.055 [2024-10-29 22:15:31.568494] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:12.055 [2024-10-29 22:15:31.568512] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:12.314 #76 NEW cov: 11195 ft: 17313 corp: 7/25b lim: 4 exec/s: 76 rss: 77Mb L: 4/4 MS: 1 ChangeByte- 00:08:12.314 [2024-10-29 22:15:31.765237] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:12.314 [2024-10-29 22:15:31.765260] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:12.314 [2024-10-29 22:15:31.765277] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:12.572 #82 NEW cov: 11195 ft: 17802 corp: 8/29b lim: 4 exec/s: 82 rss: 77Mb L: 4/4 MS: 1 ChangeByte- 00:08:12.572 [2024-10-29 22:15:31.957357] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:12.572 [2024-10-29 22:15:31.957379] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:12.572 [2024-10-29 22:15:31.957396] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:12.572 #83 NEW cov: 11202 ft: 17843 corp: 9/33b lim: 4 exec/s: 83 rss: 77Mb L: 4/4 MS: 1 ChangeBit- 00:08:12.831 [2024-10-29 22:15:32.159305] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:12.831 [2024-10-29 22:15:32.159327] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:12.831 [2024-10-29 22:15:32.159345] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:12.831 #84 NEW cov: 11202 ft: 18052 corp: 10/37b lim: 4 exec/s: 42 rss: 77Mb L: 4/4 MS: 1 ChangeByte- 00:08:12.831 #84 DONE cov: 11202 ft: 18052 corp: 10/37b lim: 4 exec/s: 42 rss: 77Mb 00:08:12.831 Done 84 runs in 2 second(s) 00:08:12.831 [2024-10-29 22:15:32.292501] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local 
vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:13.090 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:13.090 22:15:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:13.090 [2024-10-29 22:15:32.560790] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:08:13.090 [2024-10-29 22:15:32.560871] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3109979 ] 00:08:13.349 [2024-10-29 22:15:32.656524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.349 [2024-10-29 22:15:32.700787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.607 INFO: Running with entropic power schedule (0xFF, 100). 00:08:13.607 INFO: Seed: 2696231499 00:08:13.607 INFO: Loaded 1 modules (384534 inline 8-bit counters): 384534 [0x2bf8e0c, 0x2c56c22), 00:08:13.607 INFO: Loaded 1 PC tables (384534 PCs): 384534 [0x2c56c28,0x3234d88), 00:08:13.607 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:13.607 INFO: A corpus is not provided, starting from an empty corpus 00:08:13.607 #2 INITED exec/s: 0 rss: 68Mb 00:08:13.607 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:13.607 This may also happen if the target rejected all inputs we tried so far 00:08:13.607 [2024-10-29 22:15:32.944983] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:08:13.607 [2024-10-29 22:15:33.017053] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:14.123 NEW_FUNC[1/673]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:08:14.123 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:14.123 #7 NEW cov: 11144 ft: 11109 corp: 2/9b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 5 CrossOver-InsertRepeatedBytes-ChangeBinInt-EraseBytes-InsertRepeatedBytes- 00:08:14.123 [2024-10-29 22:15:33.524579] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:14.123 #11 NEW cov: 11158 ft: 14218 corp: 3/17b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 4 InsertRepeatedBytes-ChangeByte-CrossOver-InsertByte- 00:08:14.382 [2024-10-29 22:15:33.712970] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:14.382 NEW_FUNC[1/1]: 0x1bfa468 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:14.382 #12 NEW cov: 11178 ft: 15793 corp: 4/25b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 ChangeByte- 00:08:14.641 [2024-10-29 22:15:33.907907] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:14.641 #17 NEW cov: 11178 ft: 16282 corp: 5/33b lim: 8 exec/s: 17 rss: 76Mb L: 8/8 MS: 5 CrossOver-ChangeByte-ChangeByte-CrossOver-InsertRepeatedBytes- 00:08:14.641 [2024-10-29 22:15:34.092470] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:14.899 #18 NEW cov: 11178 ft: 16519 corp: 6/41b lim: 8 exec/s: 18 rss: 78Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:14.899 [2024-10-29 22:15:34.278706] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:14.899 #19 NEW cov: 11178 ft: 16742 corp: 7/49b lim: 8 exec/s: 19 rss: 78Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:15.158 [2024-10-29 22:15:34.460924] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:15.158 #20 NEW cov: 11178 ft: 16933 corp: 8/57b lim: 8 exec/s: 20 rss: 78Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:15.158 [2024-10-29 22:15:34.645269] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:15.417 #26 NEW cov: 11185 ft: 17026 corp: 9/65b lim: 8 exec/s: 26 rss: 78Mb L: 8/8 MS: 1 CMP- DE: "\377\017"- 00:08:15.417 [2024-10-29 22:15:34.834228] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:15.676 #27 NEW cov: 11185 ft: 17073 corp: 10/73b lim: 8 exec/s: 13 rss: 78Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:15.676 #27 DONE cov: 11185 ft: 17073 corp: 10/73b lim: 8 exec/s: 13 rss: 78Mb 00:08:15.676 ###### Recommended dictionary. ###### 00:08:15.676 "\377\017" # Uses: 0 00:08:15.676 ###### End of recommended dictionary. 
###### 00:08:15.676 Done 27 runs in 2 second(s) 00:08:15.676 [2024-10-29 22:15:34.964497] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:15.676 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:15.676 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:15.935 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:15.935 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:15.935 22:15:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:15.935 [2024-10-29 22:15:35.231836] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:08:15.935 [2024-10-29 22:15:35.231912] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3110333 ] 00:08:15.935 [2024-10-29 22:15:35.326973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.935 [2024-10-29 22:15:35.371758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.193 INFO: Running with entropic power schedule (0xFF, 100). 00:08:16.193 INFO: Seed: 1071259295 00:08:16.193 INFO: Loaded 1 modules (384534 inline 8-bit counters): 384534 [0x2bf8e0c, 0x2c56c22), 00:08:16.193 INFO: Loaded 1 PC tables (384534 PCs): 384534 [0x2c56c28,0x3234d88), 00:08:16.193 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:16.193 INFO: A corpus is not provided, starting from an empty corpus 00:08:16.193 #2 INITED exec/s: 0 rss: 68Mb 00:08:16.193 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:16.193 This may also happen if the target rejected all inputs we tried so far 00:08:16.193 [2024-10-29 22:15:35.628722] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:08:16.710 NEW_FUNC[1/672]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:08:16.710 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:16.710 #97 NEW cov: 11143 ft: 10805 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 5 InsertByte-InsertRepeatedBytes-ShuffleBytes-CopyPart-InsertByte- 00:08:16.968 NEW_FUNC[1/1]: 0x138baa8 in from_le32 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/endian.h:100 00:08:16.968 #98 NEW cov: 11169 ft: 14551 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeASCIIInt- 00:08:17.226 NEW_FUNC[1/1]: 0x1bfa468 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:17.226 #119 NEW cov: 11186 ft: 15566 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 CrossOver- 00:08:17.226 #123 NEW cov: 11186 ft: 16404 corp: 5/129b lim: 32 exec/s: 123 rss: 77Mb L: 32/32 MS: 4 CrossOver-ChangeByte-CopyPart-InsertRepeatedBytes- 00:08:17.484 #129 NEW cov: 11186 ft: 17108 corp: 6/161b lim: 32 exec/s: 129 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:17.743 #130 NEW cov: 11186 ft: 17311 corp: 7/193b lim: 32 exec/s: 130 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:08:17.743 #136 NEW cov: 11186 ft: 17650 corp: 8/225b lim: 32 exec/s: 136 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:08:18.001 #137 NEW cov: 11193 ft: 17794 corp: 9/257b lim: 32 exec/s: 137 rss: 77Mb L: 32/32 MS: 1 CrossOver- 00:08:18.259 #153 NEW cov: 11193 ft: 17859 corp: 10/289b lim: 32 exec/s: 153 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:08:18.260 #154 NEW cov: 11193 ft: 17968 corp: 11/321b lim: 32 exec/s: 77 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:08:18.260 #154 DONE cov: 11193 ft: 17968 corp: 11/321b lim: 32 exec/s: 77 rss: 77Mb 00:08:18.260 Done 154 runs in 2 second(s) 00:08:18.260 [2024-10-29 22:15:37.772503] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:18.519 22:15:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:18.519 22:15:37 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:18.519 22:15:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:18.519 22:15:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:18.519 22:15:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:18.519 22:15:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:18.519 22:15:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:18.519 22:15:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:18.519 22:15:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:18.519 22:15:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:18.519 22:15:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:18.519 22:15:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:18.519 22:15:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:18.519 22:15:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:18.519 22:15:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:18.519 22:15:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:18.519 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:18.519 22:15:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:18.519 22:15:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:18.519 22:15:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:18.519 [2024-10-29 22:15:38.039624] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:08:18.519 [2024-10-29 22:15:38.039710] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3110696 ] 00:08:18.778 [2024-10-29 22:15:38.136571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.778 [2024-10-29 22:15:38.181434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.036 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:19.036 INFO: Seed: 3887244624 00:08:19.036 INFO: Loaded 1 modules (384534 inline 8-bit counters): 384534 [0x2bf8e0c, 0x2c56c22), 00:08:19.036 INFO: Loaded 1 PC tables (384534 PCs): 384534 [0x2c56c28,0x3234d88), 00:08:19.036 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:19.036 INFO: A corpus is not provided, starting from an empty corpus 00:08:19.036 #2 INITED exec/s: 0 rss: 68Mb 00:08:19.036 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:19.036 This may also happen if the target rejected all inputs we tried so far 00:08:19.036 [2024-10-29 22:15:38.434814] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:19.552 NEW_FUNC[1/672]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:19.552 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:19.552 #117 NEW cov: 11153 ft: 10832 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 5 InsertRepeatedBytes-ChangeBinInt-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:08:19.810 NEW_FUNC[1/1]: 0x1f416d8 in spdk_get_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1282 00:08:19.810 #123 NEW cov: 11171 ft: 14047 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:08:20.069 NEW_FUNC[1/1]: 0x1bfa468 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:20.069 #134 NEW cov: 11188 ft: 15574 corp: 4/97b lim: 32 exec/s: 0 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:20.069 #135 NEW cov: 11188 ft: 16273 corp: 5/129b lim: 32 exec/s: 135 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:08:20.330 #136 NEW cov: 11188 ft: 17021 corp: 6/161b lim: 32 exec/s: 136 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:20.589 #137 NEW cov: 11188 ft: 17150 corp: 7/193b lim: 32 exec/s: 137 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:08:20.589 #139 NEW cov: 11188 ft: 17328 corp: 8/225b lim: 32 exec/s: 139 rss: 77Mb L: 32/32 MS: 2 EraseBytes-InsertByte- 00:08:20.847 #145 NEW cov: 11195 ft: 17740 corp: 9/257b lim: 32 exec/s: 145 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:08:21.106 #146 NEW cov: 11195 ft: 17851 corp: 10/289b lim: 32 exec/s: 73 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:08:21.106 #146 DONE cov: 11195 ft: 17851 corp: 10/289b lim: 32 exec/s: 73 rss: 77Mb 00:08:21.107 Done 146 runs in 2 second(s) 00:08:21.107 [2024-10-29 22:15:40.521509] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- 
vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:21.366 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:21.367 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:21.367 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:21.367 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:21.367 22:15:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:21.367 [2024-10-29 22:15:40.791424] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 00:08:21.367 [2024-10-29 22:15:40.791502] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111078 ] 00:08:21.626 [2024-10-29 22:15:40.890609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.626 [2024-10-29 22:15:40.936276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.626 INFO: Running with entropic power schedule (0xFF, 100). 00:08:21.626 INFO: Seed: 2339261092 00:08:21.626 INFO: Loaded 1 modules (384534 inline 8-bit counters): 384534 [0x2bf8e0c, 0x2c56c22), 00:08:21.626 INFO: Loaded 1 PC tables (384534 PCs): 384534 [0x2c56c28,0x3234d88), 00:08:21.626 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:21.626 INFO: A corpus is not provided, starting from an empty corpus 00:08:21.626 #2 INITED exec/s: 0 rss: 68Mb 00:08:21.626 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:21.626 This may also happen if the target rejected all inputs we tried so far 00:08:21.885 [2024-10-29 22:15:41.180000] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:08:21.885 [2024-10-29 22:15:41.202381] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:21.885 [2024-10-29 22:15:41.202428] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:22.144 NEW_FUNC[1/674]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:22.144 NEW_FUNC[2/674]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:22.144 #32 NEW cov: 11163 ft: 10839 corp: 2/14b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 5 InsertRepeatedBytes-InsertRepeatedBytes-CMP-ShuffleBytes-CrossOver- DE: "\001\002"- 00:08:22.144 [2024-10-29 22:15:41.640458] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:22.144 [2024-10-29 22:15:41.640505] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:22.403 #33 NEW cov: 11177 ft: 13893 corp: 3/27b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 PersAutoDict- DE: "\001\002"- 00:08:22.404 [2024-10-29 22:15:41.764696] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:22.404 [2024-10-29 22:15:41.764730] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:22.404 #34 NEW cov: 11177 ft: 14838 corp: 4/40b lim: 13 exec/s: 0 rss: 77Mb L: 13/13 MS: 1 ChangeByte- 00:08:22.404 [2024-10-29 22:15:41.887836] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:22.404 [2024-10-29 22:15:41.887870] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:22.663 NEW_FUNC[1/1]: 0x1bfa468 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:22.663 #35 NEW cov: 11194 ft: 15212 corp: 5/53b lim: 13 exec/s: 0 rss: 77Mb L: 13/13 MS: 1 CrossOver- 00:08:22.663 [2024-10-29 22:15:42.001978] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:22.663 [2024-10-29 22:15:42.002011] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:22.663 #36 NEW cov: 11197 ft: 16387 corp: 6/66b lim: 13 exec/s: 0 rss: 77Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:22.663 [2024-10-29 22:15:42.127267] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:22.663 [2024-10-29 22:15:42.127312] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:22.923 #37 NEW cov: 11197 ft: 16941 corp: 7/79b lim: 13 exec/s: 37 rss: 77Mb L: 13/13 MS: 1 CrossOver- 00:08:22.923 [2024-10-29 22:15:42.242371] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:22.923 [2024-10-29 22:15:42.242410] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:22.923 #38 NEW cov: 11197 ft: 17595 corp: 8/92b lim: 13 exec/s: 38 rss: 77Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:22.923 [2024-10-29 22:15:42.366463] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:22.923 [2024-10-29 22:15:42.366496] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 
8 return failure 00:08:22.923 #39 NEW cov: 11197 ft: 17873 corp: 9/105b lim: 13 exec/s: 39 rss: 77Mb L: 13/13 MS: 1 CopyPart- 00:08:23.182 [2024-10-29 22:15:42.489583] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:23.182 [2024-10-29 22:15:42.489616] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:23.182 #59 NEW cov: 11197 ft: 18030 corp: 10/118b lim: 13 exec/s: 59 rss: 77Mb L: 13/13 MS: 5 EraseBytes-ChangeASCIIInt-ChangeByte-CrossOver-InsertRepeatedBytes- 00:08:23.182 [2024-10-29 22:15:42.612624] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:23.182 [2024-10-29 22:15:42.612657] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:23.182 #60 NEW cov: 11197 ft: 18248 corp: 11/131b lim: 13 exec/s: 60 rss: 77Mb L: 13/13 MS: 1 ChangeASCIIInt- 00:08:23.442 [2024-10-29 22:15:42.735797] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:23.442 [2024-10-29 22:15:42.735830] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:23.442 #61 NEW cov: 11197 ft: 18290 corp: 12/144b lim: 13 exec/s: 61 rss: 77Mb L: 13/13 MS: 1 PersAutoDict- DE: "\001\002"- 00:08:23.442 [2024-10-29 22:15:42.848831] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:23.442 [2024-10-29 22:15:42.848864] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:23.442 #62 NEW cov: 11197 ft: 18468 corp: 13/157b lim: 13 exec/s: 62 rss: 77Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:23.442 [2024-10-29 22:15:42.961790] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:23.442 [2024-10-29 22:15:42.961825] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:23.701 #63 NEW cov: 11204 ft: 18813 corp: 14/170b lim: 13 exec/s: 63 rss: 77Mb L: 13/13 MS: 1 ChangeBit- 00:08:23.701 [2024-10-29 22:15:43.084933] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:23.701 [2024-10-29 22:15:43.084965] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:23.701 #64 pulse cov: 11204 ft: 18872 corp: 14/170b lim: 13 exec/s: 32 rss: 77Mb 00:08:23.701 #64 NEW cov: 11204 ft: 18872 corp: 15/183b lim: 13 exec/s: 32 rss: 77Mb L: 13/13 MS: 1 CopyPart- 00:08:23.701 #64 DONE cov: 11204 ft: 18872 corp: 15/183b lim: 13 exec/s: 32 rss: 77Mb 00:08:23.701 ###### Recommended dictionary. ###### 00:08:23.701 "\001\002" # Uses: 3 00:08:23.701 ###### End of recommended dictionary. 
###### 00:08:23.701 Done 64 runs in 2 second(s) 00:08:23.701 [2024-10-29 22:15:43.179495] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:23.960 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:23.960 22:15:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:23.960 [2024-10-29 22:15:43.449708] Starting SPDK v25.01-pre git sha1 344e7bdd4 / DPDK 24.03.0 initialization... 
00:08:23.960 [2024-10-29 22:15:43.449775] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111443 ] 00:08:24.220 [2024-10-29 22:15:43.543182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.220 [2024-10-29 22:15:43.591340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.480 INFO: Running with entropic power schedule (0xFF, 100). 00:08:24.480 INFO: Seed: 700297878 00:08:24.480 INFO: Loaded 1 modules (384534 inline 8-bit counters): 384534 [0x2bf8e0c, 0x2c56c22), 00:08:24.480 INFO: Loaded 1 PC tables (384534 PCs): 384534 [0x2c56c28,0x3234d88), 00:08:24.480 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:24.480 INFO: A corpus is not provided, starting from an empty corpus 00:08:24.480 #2 INITED exec/s: 0 rss: 68Mb 00:08:24.480 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:24.480 This may also happen if the target rejected all inputs we tried so far 00:08:24.480 [2024-10-29 22:15:43.833815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:08:24.480 [2024-10-29 22:15:43.908055] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:24.480 [2024-10-29 22:15:43.908091] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:24.998 NEW_FUNC[1/674]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:24.998 NEW_FUNC[2/674]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:24.998 #47 NEW cov: 11155 ft: 11078 corp: 2/10b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 5 CrossOver-ChangeBit-CMP-CrossOver-CopyPart- DE: "\377\377\377\377"- 00:08:24.998 [2024-10-29 22:15:44.409023] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:24.998 [2024-10-29 22:15:44.409067] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:24.998 #48 NEW cov: 11169 ft: 14516 corp: 3/19b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ChangeBit- 00:08:25.257 [2024-10-29 22:15:44.587038] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:25.257 [2024-10-29 22:15:44.587072] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:25.257 NEW_FUNC[1/1]: 0x1bfa468 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:25.257 #49 NEW cov: 11186 ft: 15612 corp: 4/28b lim: 9 exec/s: 0 rss: 77Mb L: 9/9 MS: 1 ChangeBit- 00:08:25.516 [2024-10-29 22:15:44.782390] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:25.516 [2024-10-29 22:15:44.782426] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:25.516 #55 NEW cov: 11189 ft: 16724 corp: 5/37b lim: 9 exec/s: 55 rss: 77Mb L: 9/9 MS: 1 CrossOver- 00:08:25.516 [2024-10-29 22:15:44.985087] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:25.516 [2024-10-29 22:15:44.985121] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 
00:08:25.775 #56 NEW cov: 11189 ft: 17289 corp: 6/46b lim: 9 exec/s: 56 rss: 77Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:25.775 [2024-10-29 22:15:45.180689] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:25.775 [2024-10-29 22:15:45.180719] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:25.775 #58 NEW cov: 11189 ft: 17392 corp: 7/55b lim: 9 exec/s: 58 rss: 77Mb L: 9/9 MS: 2 EraseBytes-CopyPart- 00:08:26.035 [2024-10-29 22:15:45.363815] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:26.035 [2024-10-29 22:15:45.363844] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:26.035 #59 NEW cov: 11189 ft: 17622 corp: 8/64b lim: 9 exec/s: 59 rss: 77Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:26.035 [2024-10-29 22:15:45.545184] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:26.035 [2024-10-29 22:15:45.545213] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:26.294 #70 NEW cov: 11196 ft: 17959 corp: 9/73b lim: 9 exec/s: 70 rss: 77Mb L: 9/9 MS: 1 ChangeBit- 00:08:26.294 [2024-10-29 22:15:45.726284] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:26.294 [2024-10-29 22:15:45.726321] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:26.553 #71 NEW cov: 11196 ft: 18002 corp: 10/82b lim: 9 exec/s: 35 rss: 77Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:26.553 #71 DONE cov: 11196 ft: 18002 corp: 10/82b lim: 9 exec/s: 35 rss: 77Mb 00:08:26.553 ###### Recommended dictionary. ###### 00:08:26.553 "\377\377\377\377" # Uses: 2 00:08:26.553 ###### End of recommended dictionary. 
###### 00:08:26.553 Done 71 runs in 2 second(s) 00:08:26.553 [2024-10-29 22:15:45.853508] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:08:26.553 22:15:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:08:26.812 22:15:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:26.812 22:15:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:26.812 22:15:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:08:26.812 00:08:26.812 real 0m19.574s 00:08:26.812 user 0m27.295s 00:08:26.812 sys 0m1.990s 00:08:26.812 22:15:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:26.812 22:15:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:26.812 ************************************ 00:08:26.812 END TEST vfio_llvm_fuzz 00:08:26.812 ************************************ 00:08:26.812 00:08:26.812 real 1m24.022s 00:08:26.812 user 2m7.744s 00:08:26.812 sys 0m9.660s 00:08:26.812 22:15:46 llvm_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:26.812 22:15:46 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:26.812 ************************************ 00:08:26.812 END TEST llvm_fuzz 00:08:26.812 ************************************ 00:08:26.812 22:15:46 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:08:26.812 22:15:46 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:08:26.812 22:15:46 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:08:26.812 22:15:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:26.812 22:15:46 -- common/autotest_common.sh@10 -- # set +x 00:08:26.812 22:15:46 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:08:26.812 22:15:46 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:08:26.812 22:15:46 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:08:26.812 22:15:46 -- common/autotest_common.sh@10 -- # set +x 00:08:32.090 INFO: APP EXITING 00:08:32.090 INFO: killing all VMs 00:08:32.090 INFO: killing vhost app 00:08:32.090 INFO: EXIT DONE 00:08:34.632 Waiting for block devices as requested 00:08:34.632 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:08:34.632 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:34.632 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:34.632 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:34.632 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:34.892 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:34.892 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:34.892 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:35.151 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:35.151 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:35.151 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:35.411 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:35.411 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:35.411 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:35.411 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:35.670 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:35.670 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:38.965 Cleaning 00:08:38.966 Removing: /dev/shm/spdk_tgt_trace.pid3089301 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3086955 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3088080 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3089301 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3089775 00:08:38.966 Removing: 
/var/run/dpdk/spdk_pid3090548 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3090583 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3091394 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3091511 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3091855 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3092096 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3092329 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3092579 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3092815 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3093015 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3093210 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3093443 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3094022 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3096535 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3096740 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3096940 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3096949 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3097339 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3097382 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3097894 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3097905 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3098120 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3098282 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3098442 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3098512 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3098963 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3099162 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3099331 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3099451 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3100024 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3100385 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3100739 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3101098 00:08:38.966 Removing: /var/run/dpdk/spdk_pid3101451 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3101804 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3102096 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3102393 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3102729 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3103080 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3103442 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3103799 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3104159 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3104516 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3104875 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3105302 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3105717 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3106374 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3106681 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3107018 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3107381 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3107738 00:08:39.225 Removing: /var/run/dpdk/spdk_pid3108094 00:08:39.226 Removing: /var/run/dpdk/spdk_pid3108448 00:08:39.226 Removing: /var/run/dpdk/spdk_pid3108804 00:08:39.226 Removing: /var/run/dpdk/spdk_pid3109251 00:08:39.226 Removing: /var/run/dpdk/spdk_pid3109615 00:08:39.226 Removing: /var/run/dpdk/spdk_pid3109979 00:08:39.226 Removing: /var/run/dpdk/spdk_pid3110333 00:08:39.226 Removing: /var/run/dpdk/spdk_pid3110696 00:08:39.226 Removing: /var/run/dpdk/spdk_pid3111078 00:08:39.226 Removing: /var/run/dpdk/spdk_pid3111443 00:08:39.226 Clean 00:08:39.226 22:15:58 -- common/autotest_common.sh@1451 -- # return 0 00:08:39.226 22:15:58 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:08:39.226 22:15:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.226 22:15:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.226 22:15:58 -- 
00:08:39.226 22:15:58 -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:39.226 22:15:58 -- common/autotest_common.sh@10 -- # set +x
00:08:39.484 22:15:58 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:39.484 22:15:58 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:08:39.484 22:15:58 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:08:39.484 22:15:58 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:08:39.484 22:15:58 -- spdk/autotest.sh@394 -- # hostname
00:08:39.484 22:15:58 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-49 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info
00:08:39.744 geninfo: WARNING: invalid characters removed from testname!
00:08:43.040 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda
00:08:49.617 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda
00:08:52.914 22:16:11 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:01.040 22:16:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:06.324 22:16:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:11.900 22:16:30 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:17.180 22:16:35 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:22.460 22:16:41 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:27.737 22:16:46 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:09:27.737 22:16:46 -- spdk/autorun.sh@1 -- $ timing_finish
00:09:27.737 22:16:46 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]]
00:09:27.737 22:16:46 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:09:27.737 22:16:46 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:09:27.737 22:16:46 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:09:27.737 + [[ -n 2988536 ]]
00:09:27.737 + sudo kill 2988536
00:09:27.748 [Pipeline] }
00:09:27.764 [Pipeline] // stage
00:09:27.769 [Pipeline] }
00:09:27.784 [Pipeline] // timeout
00:09:27.789 [Pipeline] }
00:09:27.803 [Pipeline] // catchError
00:09:27.809 [Pipeline] }
00:09:27.826 [Pipeline] // wrap
00:09:27.832 [Pipeline] }
00:09:27.846 [Pipeline] // catchError
00:09:27.856 [Pipeline] stage
00:09:27.858 [Pipeline] { (Epilogue)
00:09:27.872 [Pipeline] catchError
00:09:27.874 [Pipeline] {
00:09:27.888 [Pipeline] echo
00:09:27.889 Cleanup processes
00:09:27.896 [Pipeline] sh
00:09:28.184 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:28.184 3117717 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:28.199 [Pipeline] sh
00:09:28.486 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:28.486 ++ grep -v 'sudo pgrep'
00:09:28.486 ++ awk '{print $1}'
00:09:28.486 + sudo kill -9
00:09:28.486 + true
00:09:28.499 [Pipeline] sh
00:09:28.783 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:09:41.016 [Pipeline] sh
00:09:41.304 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:09:41.304 Artifacts sizes are good
00:09:41.320 [Pipeline] archiveArtifacts
00:09:41.327 Archiving artifacts
00:09:41.471 [Pipeline] sh
00:09:41.759 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest
00:09:41.774 [Pipeline] cleanWs
00:09:41.785 [WS-CLEANUP] Deleting project workspace...
00:09:41.785 [WS-CLEANUP] Deferred wipeout is used...
00:09:41.792 [WS-CLEANUP] done
00:09:41.794 [Pipeline] }
00:09:41.810 [Pipeline] // catchError
00:09:41.823 [Pipeline] sh
00:09:42.107 + logger -p user.info -t JENKINS-CI
00:09:42.117 [Pipeline] }
00:09:42.130 [Pipeline] // stage
00:09:42.135 [Pipeline] }
00:09:42.148 [Pipeline] // node
00:09:42.154 [Pipeline] End of Pipeline
00:09:42.190 Finished: SUCCESS
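For reference, the coverage post-processing recorded above (spdk/autotest.sh@394 through @404) follows a standard lcov capture/merge/filter flow: capture test-time counters with a clang/LLVM gcov wrapper, merge them with the pre-test baseline, then strip third-party and uninteresting paths from the combined tracefile. The sketch below is an editorial illustration of that pattern, not the autotest script itself; the REPO/OUT variables and the filter globs are assumptions modeled on the commands shown in the log.

#!/usr/bin/env bash
# Minimal sketch of the lcov merge/filter flow seen in the log above.
# REPO, OUT and the glob list are illustrative placeholders, not the
# exact values used by spdk/autotest.sh.
set -euo pipefail

REPO=/path/to/spdk                                  # assumed repository root
OUT=$REPO/../output                                 # assumed output directory
GCOV_TOOL=$REPO/test/fuzz/llvm/llvm-gcov.sh         # wrapper so lcov can read clang .gcda data

# Merge the baseline (pre-test) capture and the test-time capture into one tracefile.
lcov --gcov-tool "$GCOV_TOOL" -q \
     -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
     -o "$OUT/cov_total.info"

# Remove third-party and generated code so the report covers project sources only.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done

# The intermediate captures are no longer needed once the filtered total exists.
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"

Each lcov -r pass rewrites cov_total.info in place, which mirrors how the run above progressively strips dpdk, system headers, and example apps from the final coverage report.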